00:00:00.001 Started by upstream project "autotest-per-patch" build number 132416 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.062 The recommended git tool is: git 00:00:00.062 using credential 00000000-0000-0000-0000-000000000002 00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.097 Fetching changes from the remote Git repository 00:00:00.099 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.139 Using shallow fetch with depth 1 00:00:00.139 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.189 > git --version # 'git version 2.39.2' 00:00:00.189 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.229 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.133 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.151 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.165 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.165 > git config core.sparsecheckout # timeout=10 00:00:04.180 > git read-tree -mu HEAD # timeout=10 00:00:04.196 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.220 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.220 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.325 [Pipeline] Start of Pipeline 00:00:04.340 [Pipeline] library 00:00:04.342 Loading library shm_lib@master 00:00:07.673 Library shm_lib@master is cached. Copying from home. 00:00:07.749 [Pipeline] node 00:00:22.801 Still waiting to schedule task 00:00:22.801 Waiting for next available executor on ‘vagrant-vm-host’ 00:01:01.232 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:01.234 [Pipeline] { 00:01:01.240 [Pipeline] catchError 00:01:01.241 [Pipeline] { 00:01:01.251 [Pipeline] wrap 00:01:01.258 [Pipeline] { 00:01:01.264 [Pipeline] stage 00:01:01.266 [Pipeline] { (Prologue) 00:01:01.281 [Pipeline] echo 00:01:01.283 Node: VM-host-SM16 00:01:01.289 [Pipeline] cleanWs 00:01:01.296 [WS-CLEANUP] Deleting project workspace... 00:01:01.296 [WS-CLEANUP] Deferred wipeout is used... 
00:01:01.301 [WS-CLEANUP] done 00:01:01.497 [Pipeline] setCustomBuildProperty 00:01:01.616 [Pipeline] httpRequest 00:01:01.977 [Pipeline] echo 00:01:01.979 Sorcerer 10.211.164.20 is alive 00:01:01.992 [Pipeline] retry 00:01:01.994 [Pipeline] { 00:01:02.007 [Pipeline] httpRequest 00:01:02.012 HttpMethod: GET 00:01:02.012 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:02.013 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:02.014 Response Code: HTTP/1.1 200 OK 00:01:02.015 Success: Status code 200 is in the accepted range: 200,404 00:01:02.015 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:02.448 [Pipeline] } 00:01:02.466 [Pipeline] // retry 00:01:02.474 [Pipeline] sh 00:01:02.753 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:01:02.771 [Pipeline] httpRequest 00:01:03.212 [Pipeline] echo 00:01:03.214 Sorcerer 10.211.164.20 is alive 00:01:03.224 [Pipeline] retry 00:01:03.226 [Pipeline] { 00:01:03.241 [Pipeline] httpRequest 00:01:03.245 HttpMethod: GET 00:01:03.246 URL: http://10.211.164.20/packages/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz 00:01:03.246 Sending request to url: http://10.211.164.20/packages/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz 00:01:03.247 Response Code: HTTP/1.1 200 OK 00:01:03.248 Success: Status code 200 is in the accepted range: 200,404 00:01:03.248 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz 00:01:07.783 [Pipeline] } 00:01:07.802 [Pipeline] // retry 00:01:07.810 [Pipeline] sh 00:01:08.090 + tar --no-same-owner -xf spdk_0728de5b0db32c537468e1c1f0bb2b85c9971877.tar.gz 00:01:11.380 [Pipeline] sh 00:01:11.662 + git -C spdk log --oneline -n5 00:01:11.662 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:01:11.662 349af566b nvmf: Get metadata config by not bdev but bdev_desc 00:01:11.662 1981e6eec bdevperf: Add hide_metadata option 00:01:11.662 66a383faf bdevperf: Get metadata config by not bdev but bdev_desc 00:01:11.662 25916e30c bdevperf: Store the result of DIF type check into job structure 00:01:11.681 [Pipeline] writeFile 00:01:11.696 [Pipeline] sh 00:01:11.979 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:11.990 [Pipeline] sh 00:01:12.270 + cat autorun-spdk.conf 00:01:12.270 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.270 SPDK_TEST_NVMF=1 00:01:12.270 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.270 SPDK_TEST_URING=1 00:01:12.270 SPDK_TEST_USDT=1 00:01:12.270 SPDK_RUN_UBSAN=1 00:01:12.270 NET_TYPE=virt 00:01:12.270 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.276 RUN_NIGHTLY=0 00:01:12.279 [Pipeline] } 00:01:12.295 [Pipeline] // stage 00:01:12.314 [Pipeline] stage 00:01:12.317 [Pipeline] { (Run VM) 00:01:12.331 [Pipeline] sh 00:01:12.611 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.611 + echo 'Start stage prepare_nvme.sh' 00:01:12.611 Start stage prepare_nvme.sh 00:01:12.611 + [[ -n 4 ]] 00:01:12.611 + disk_prefix=ex4 00:01:12.611 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 ]] 00:01:12.611 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf ]] 00:01:12.611 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf 00:01:12.611 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.611 ++ SPDK_TEST_NVMF=1 00:01:12.611 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:12.611 ++ SPDK_TEST_URING=1 00:01:12.611 ++ SPDK_TEST_USDT=1 00:01:12.611 ++ SPDK_RUN_UBSAN=1 00:01:12.611 ++ NET_TYPE=virt 00:01:12.611 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.611 ++ RUN_NIGHTLY=0 00:01:12.611 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:12.611 + nvme_files=() 00:01:12.611 + declare -A nvme_files 00:01:12.611 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.611 + nvme_files['nvme.img']=5G 00:01:12.611 + nvme_files['nvme-cmb.img']=5G 00:01:12.611 + nvme_files['nvme-multi0.img']=4G 00:01:12.611 + nvme_files['nvme-multi1.img']=4G 00:01:12.611 + nvme_files['nvme-multi2.img']=4G 00:01:12.611 + nvme_files['nvme-openstack.img']=8G 00:01:12.611 + nvme_files['nvme-zns.img']=5G 00:01:12.611 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.611 + (( SPDK_TEST_FTL == 1 )) 00:01:12.611 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.611 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:12.611 + for nvme in "${!nvme_files[@]}" 00:01:12.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:12.611 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.611 + for nvme in "${!nvme_files[@]}" 00:01:12.611 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:13.179 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.179 + for nvme in "${!nvme_files[@]}" 00:01:13.179 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:13.179 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:13.179 + for nvme in "${!nvme_files[@]}" 00:01:13.179 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:13.179 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.179 + for nvme in "${!nvme_files[@]}" 00:01:13.179 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:13.179 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.179 + for nvme in "${!nvme_files[@]}" 00:01:13.179 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:13.457 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:13.457 + for nvme in "${!nvme_files[@]}" 00:01:13.457 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:14.046 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:14.046 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:14.046 + echo 'End stage prepare_nvme.sh' 00:01:14.046 End stage prepare_nvme.sh 00:01:14.058 [Pipeline] sh 00:01:14.338 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:14.338 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b 
/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:14.338 00:01:14.338 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant 00:01:14.338 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk 00:01:14.338 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:14.338 HELP=0 00:01:14.338 DRY_RUN=0 00:01:14.338 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:14.338 NVME_DISKS_TYPE=nvme,nvme, 00:01:14.338 NVME_AUTO_CREATE=0 00:01:14.338 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:14.338 NVME_CMB=,, 00:01:14.338 NVME_PMR=,, 00:01:14.338 NVME_ZNS=,, 00:01:14.338 NVME_MS=,, 00:01:14.338 NVME_FDP=,, 00:01:14.338 SPDK_VAGRANT_DISTRO=fedora39 00:01:14.338 SPDK_VAGRANT_VMCPU=10 00:01:14.338 SPDK_VAGRANT_VMRAM=12288 00:01:14.338 SPDK_VAGRANT_PROVIDER=libvirt 00:01:14.338 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:14.338 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:14.338 SPDK_OPENSTACK_NETWORK=0 00:01:14.338 VAGRANT_PACKAGE_BOX=0 00:01:14.338 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:14.338 FORCE_DISTRO=true 00:01:14.338 VAGRANT_BOX_VERSION= 00:01:14.338 EXTRA_VAGRANTFILES= 00:01:14.338 NIC_MODEL=e1000 00:01:14.338 00:01:14.338 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt' 00:01:14.338 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2 00:01:17.621 Bringing machine 'default' up with 'libvirt' provider... 00:01:18.555 ==> default: Creating image (snapshot of base box volume). 00:01:18.555 ==> default: Creating domain with the following settings... 
00:01:18.555 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732117756_df2593eb1330af4052de 00:01:18.556 ==> default: -- Domain type: kvm 00:01:18.556 ==> default: -- Cpus: 10 00:01:18.556 ==> default: -- Feature: acpi 00:01:18.556 ==> default: -- Feature: apic 00:01:18.556 ==> default: -- Feature: pae 00:01:18.556 ==> default: -- Memory: 12288M 00:01:18.556 ==> default: -- Memory Backing: hugepages: 00:01:18.556 ==> default: -- Management MAC: 00:01:18.556 ==> default: -- Loader: 00:01:18.556 ==> default: -- Nvram: 00:01:18.556 ==> default: -- Base box: spdk/fedora39 00:01:18.556 ==> default: -- Storage pool: default 00:01:18.556 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732117756_df2593eb1330af4052de.img (20G) 00:01:18.556 ==> default: -- Volume Cache: default 00:01:18.556 ==> default: -- Kernel: 00:01:18.556 ==> default: -- Initrd: 00:01:18.556 ==> default: -- Graphics Type: vnc 00:01:18.556 ==> default: -- Graphics Port: -1 00:01:18.556 ==> default: -- Graphics IP: 127.0.0.1 00:01:18.556 ==> default: -- Graphics Password: Not defined 00:01:18.556 ==> default: -- Video Type: cirrus 00:01:18.556 ==> default: -- Video VRAM: 9216 00:01:18.556 ==> default: -- Sound Type: 00:01:18.556 ==> default: -- Keymap: en-us 00:01:18.556 ==> default: -- TPM Path: 00:01:18.556 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:18.556 ==> default: -- Command line args: 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:18.556 ==> default: -> value=-drive, 00:01:18.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:18.556 ==> default: -> value=-drive, 00:01:18.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.556 ==> default: -> value=-drive, 00:01:18.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.556 ==> default: -> value=-drive, 00:01:18.556 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:18.556 ==> default: -> value=-device, 00:01:18.556 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:18.814 ==> default: Creating shared folders metadata... 00:01:18.814 ==> default: Starting domain. 00:01:21.343 ==> default: Waiting for domain to get an IP address... 00:01:39.497 ==> default: Waiting for SSH to become available... 00:01:39.497 ==> default: Configuring and enabling network interfaces... 
00:01:42.775 default: SSH address: 192.168.121.124:22 00:01:42.775 default: SSH username: vagrant 00:01:42.775 default: SSH auth method: private key 00:01:45.299 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.403 ==> default: Mounting SSHFS shared folder... 00:01:55.304 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:55.304 ==> default: Checking Mount.. 00:01:56.238 ==> default: Folder Successfully Mounted! 00:01:56.238 ==> default: Running provisioner: file... 00:01:57.172 default: ~/.gitconfig => .gitconfig 00:01:57.486 00:01:57.486 SUCCESS! 00:01:57.486 00:01:57.486 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:57.486 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.486 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:01:57.486 00:01:57.510 [Pipeline] } 00:01:57.528 [Pipeline] // stage 00:01:57.537 [Pipeline] dir 00:01:57.537 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/fedora39-libvirt 00:01:57.539 [Pipeline] { 00:01:57.554 [Pipeline] catchError 00:01:57.557 [Pipeline] { 00:01:57.571 [Pipeline] sh 00:01:57.849 + vagrant ssh-config --host vagrant 00:01:57.849 + sed -ne /^Host/,$p 00:01:57.849 + tee ssh_conf 00:02:02.031 Host vagrant 00:02:02.031 HostName 192.168.121.124 00:02:02.031 User vagrant 00:02:02.031 Port 22 00:02:02.031 UserKnownHostsFile /dev/null 00:02:02.031 StrictHostKeyChecking no 00:02:02.031 PasswordAuthentication no 00:02:02.031 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:02.031 IdentitiesOnly yes 00:02:02.031 LogLevel FATAL 00:02:02.031 ForwardAgent yes 00:02:02.031 ForwardX11 yes 00:02:02.031 00:02:02.044 [Pipeline] withEnv 00:02:02.047 [Pipeline] { 00:02:02.059 [Pipeline] sh 00:02:02.434 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:02.434 source /etc/os-release 00:02:02.434 [[ -e /image.version ]] && img=$(< /image.version) 00:02:02.434 # Minimal, systemd-like check. 00:02:02.434 if [[ -e /.dockerenv ]]; then 00:02:02.434 # Clear garbage from the node's name: 00:02:02.434 # agt-er_autotest_547-896 -> autotest_547-896 00:02:02.434 # $HOSTNAME is the actual container id 00:02:02.434 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:02.434 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:02.435 # We can assume this is a mount from a host where container is running, 00:02:02.435 # so fetch its hostname to easily identify the target swarm worker. 
00:02:02.435 container="$(< /etc/hostname) ($agent)" 00:02:02.435 else 00:02:02.435 # Fallback 00:02:02.435 container=$agent 00:02:02.435 fi 00:02:02.435 fi 00:02:02.435 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:02.435 00:02:02.445 [Pipeline] } 00:02:02.467 [Pipeline] // withEnv 00:02:02.475 [Pipeline] setCustomBuildProperty 00:02:02.487 [Pipeline] stage 00:02:02.489 [Pipeline] { (Tests) 00:02:02.504 [Pipeline] sh 00:02:02.781 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:03.052 [Pipeline] sh 00:02:03.329 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:03.600 [Pipeline] timeout 00:02:03.601 Timeout set to expire in 1 hr 0 min 00:02:03.603 [Pipeline] { 00:02:03.617 [Pipeline] sh 00:02:03.896 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:04.462 HEAD is now at 0728de5b0 nvmf: Add hide_metadata option to nvmf_subsystem_add_ns 00:02:04.474 [Pipeline] sh 00:02:04.754 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:05.024 [Pipeline] sh 00:02:05.302 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:05.575 [Pipeline] sh 00:02:05.854 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:06.113 ++ readlink -f spdk_repo 00:02:06.113 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:06.113 + [[ -n /home/vagrant/spdk_repo ]] 00:02:06.113 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:06.113 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:06.113 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:06.113 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:06.113 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:06.113 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:06.113 + cd /home/vagrant/spdk_repo 00:02:06.113 + source /etc/os-release 00:02:06.113 ++ NAME='Fedora Linux' 00:02:06.113 ++ VERSION='39 (Cloud Edition)' 00:02:06.113 ++ ID=fedora 00:02:06.113 ++ VERSION_ID=39 00:02:06.113 ++ VERSION_CODENAME= 00:02:06.113 ++ PLATFORM_ID=platform:f39 00:02:06.113 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:06.113 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.113 ++ LOGO=fedora-logo-icon 00:02:06.113 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:06.113 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.113 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:06.113 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.113 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.113 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.113 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:06.113 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.113 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:06.113 ++ SUPPORT_END=2024-11-12 00:02:06.113 ++ VARIANT='Cloud Edition' 00:02:06.113 ++ VARIANT_ID=cloud 00:02:06.113 + uname -a 00:02:06.113 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:06.113 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:06.372 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:06.372 Hugepages 00:02:06.372 node hugesize free / total 00:02:06.372 node0 1048576kB 0 / 0 00:02:06.372 node0 2048kB 0 / 0 00:02:06.372 00:02:06.372 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:06.631 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:06.631 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:06.631 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:06.631 + rm -f /tmp/spdk-ld-path 00:02:06.631 + source autorun-spdk.conf 00:02:06.631 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.631 ++ SPDK_TEST_NVMF=1 00:02:06.631 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.631 ++ SPDK_TEST_URING=1 00:02:06.631 ++ SPDK_TEST_USDT=1 00:02:06.631 ++ SPDK_RUN_UBSAN=1 00:02:06.631 ++ NET_TYPE=virt 00:02:06.631 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.631 ++ RUN_NIGHTLY=0 00:02:06.631 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:06.631 + [[ -n '' ]] 00:02:06.631 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:06.631 + for M in /var/spdk/build-*-manifest.txt 00:02:06.631 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:06.631 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.631 + for M in /var/spdk/build-*-manifest.txt 00:02:06.631 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:06.631 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.631 + for M in /var/spdk/build-*-manifest.txt 00:02:06.631 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:06.631 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:06.631 ++ uname 00:02:06.631 + [[ Linux == \L\i\n\u\x ]] 00:02:06.631 + sudo dmesg -T 00:02:06.631 + sudo dmesg --clear 00:02:06.631 + dmesg_pid=5372 00:02:06.631 + sudo dmesg -Tw 00:02:06.631 + [[ Fedora Linux == FreeBSD ]] 00:02:06.631 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.631 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:06.631 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:06.631 + [[ -x /usr/src/fio-static/fio ]] 00:02:06.631 + export FIO_BIN=/usr/src/fio-static/fio 00:02:06.631 + FIO_BIN=/usr/src/fio-static/fio 00:02:06.631 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:06.631 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:06.631 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:06.631 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.631 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:06.631 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:06.631 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.631 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:06.631 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.890 15:50:04 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:06.890 15:50:04 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:06.890 15:50:04 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:02:06.890 15:50:04 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:06.890 15:50:04 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:06.890 15:50:04 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:06.890 15:50:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:06.890 15:50:04 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:06.890 15:50:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:06.890 15:50:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.890 15:50:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.890 15:50:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.890 15:50:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.890 15:50:04 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.890 15:50:04 -- paths/export.sh@5 -- $ export PATH 00:02:06.890 15:50:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.890 15:50:04 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:06.890 15:50:04 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:06.890 15:50:04 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732117804.XXXXXX 00:02:06.890 15:50:04 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732117804.cD6YMP 00:02:06.890 15:50:04 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:06.890 15:50:04 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:06.890 15:50:04 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:06.890 15:50:04 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:06.890 15:50:04 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:06.890 15:50:04 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:06.890 15:50:04 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:06.890 15:50:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.890 15:50:04 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:02:06.890 15:50:04 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:06.890 15:50:04 -- pm/common@17 -- $ local monitor 00:02:06.890 15:50:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.890 15:50:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.890 15:50:04 -- pm/common@21 -- $ date +%s 00:02:06.890 15:50:04 -- pm/common@25 -- $ sleep 1 00:02:06.890 15:50:04 -- pm/common@21 -- $ date +%s 00:02:06.890 15:50:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732117804 00:02:06.890 15:50:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732117804 00:02:06.890 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732117804_collect-vmstat.pm.log 00:02:06.890 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732117804_collect-cpu-load.pm.log 00:02:07.825 15:50:05 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:07.825 15:50:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:07.825 15:50:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:07.825 15:50:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:07.825 15:50:05 -- spdk/autobuild.sh@16 -- $ date -u 00:02:07.825 Wed Nov 20 03:50:05 PM UTC 2024 00:02:07.825 15:50:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:07.825 v25.01-pre-241-g0728de5b0 00:02:07.825 15:50:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:07.825 15:50:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:07.825 15:50:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:07.825 15:50:05 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:07.825 15:50:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:07.825 15:50:06 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.825 ************************************ 00:02:07.825 START TEST ubsan 00:02:07.825 ************************************ 00:02:07.825 using ubsan 00:02:07.825 15:50:06 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:07.825 00:02:07.825 real 0m0.000s 00:02:07.825 user 0m0.000s 00:02:07.825 sys 0m0.000s 00:02:07.825 15:50:06 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:07.825 15:50:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:07.825 ************************************ 00:02:07.825 END TEST ubsan 00:02:07.825 ************************************ 00:02:07.825 15:50:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:07.825 15:50:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:07.825 15:50:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:07.825 15:50:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:02:08.083 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:08.083 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:08.648 Using 'verbs' RDMA provider 00:02:24.452 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:36.703 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:36.703 Creating mk/config.mk...done. 00:02:36.703 Creating mk/cc.flags.mk...done. 00:02:36.703 Type 'make' to build. 
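For reference, the configure and build steps recorded in this log can be reproduced by hand roughly as follows. This is a minimal sketch, not the CI script itself: it assumes an SPDK checkout at the same path used above (/home/vagrant/spdk_repo/spdk), fio sources at /usr/src/fio, and simply replays the option string and job count that appear in the log.

    # Re-run the configuration step with the same options autobuild.sh logged above
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    # Build with the same parallelism used by the test job below
    make -j10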
00:02:36.703 15:50:33 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:36.703 15:50:33 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.703 15:50:33 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.703 15:50:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.703 ************************************ 00:02:36.703 START TEST make 00:02:36.703 ************************************ 00:02:36.703 15:50:33 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:36.703 make[1]: Nothing to be done for 'all'. 00:02:48.913 The Meson build system 00:02:48.913 Version: 1.5.0 00:02:48.913 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:48.913 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:48.913 Build type: native build 00:02:48.913 Program cat found: YES (/usr/bin/cat) 00:02:48.913 Project name: DPDK 00:02:48.913 Project version: 24.03.0 00:02:48.913 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.913 C linker for the host machine: cc ld.bfd 2.40-14 00:02:48.913 Host machine cpu family: x86_64 00:02:48.913 Host machine cpu: x86_64 00:02:48.913 Message: ## Building in Developer Mode ## 00:02:48.913 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.913 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.913 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.913 Program python3 found: YES (/usr/bin/python3) 00:02:48.913 Program cat found: YES (/usr/bin/cat) 00:02:48.913 Compiler for C supports arguments -march=native: YES 00:02:48.913 Checking for size of "void *" : 8 00:02:48.913 Checking for size of "void *" : 8 (cached) 00:02:48.913 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:48.913 Library m found: YES 00:02:48.913 Library numa found: YES 00:02:48.913 Has header "numaif.h" : YES 00:02:48.913 Library fdt found: NO 00:02:48.913 Library execinfo found: NO 00:02:48.913 Has header "execinfo.h" : YES 00:02:48.913 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.913 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.913 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.913 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.913 Run-time dependency openssl found: YES 3.1.1 00:02:48.913 Run-time dependency libpcap found: YES 1.10.4 00:02:48.914 Has header "pcap.h" with dependency libpcap: YES 00:02:48.914 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.914 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.914 Compiler for C supports arguments -Wformat: YES 00:02:48.914 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.914 Compiler for C supports arguments -Wformat-security: NO 00:02:48.914 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.914 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.914 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.914 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.914 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.914 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.914 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.914 Compiler for C supports arguments -Wundef: YES 00:02:48.914 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.914 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:48.914 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.914 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.914 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.914 Program objdump found: YES (/usr/bin/objdump) 00:02:48.914 Compiler for C supports arguments -mavx512f: YES 00:02:48.914 Checking if "AVX512 checking" compiles: YES 00:02:48.914 Fetching value of define "__SSE4_2__" : 1 00:02:48.914 Fetching value of define "__AES__" : 1 00:02:48.914 Fetching value of define "__AVX__" : 1 00:02:48.914 Fetching value of define "__AVX2__" : 1 00:02:48.914 Fetching value of define "__AVX512BW__" : (undefined) 00:02:48.914 Fetching value of define "__AVX512CD__" : (undefined) 00:02:48.914 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:48.914 Fetching value of define "__AVX512F__" : (undefined) 00:02:48.914 Fetching value of define "__AVX512VL__" : (undefined) 00:02:48.914 Fetching value of define "__PCLMUL__" : 1 00:02:48.914 Fetching value of define "__RDRND__" : 1 00:02:48.914 Fetching value of define "__RDSEED__" : 1 00:02:48.914 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:48.914 Fetching value of define "__znver1__" : (undefined) 00:02:48.914 Fetching value of define "__znver2__" : (undefined) 00:02:48.914 Fetching value of define "__znver3__" : (undefined) 00:02:48.914 Fetching value of define "__znver4__" : (undefined) 00:02:48.914 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.914 Message: lib/log: Defining dependency "log" 00:02:48.914 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.914 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.914 Checking for function "getentropy" : NO 00:02:48.914 Message: lib/eal: Defining dependency "eal" 00:02:48.914 Message: lib/ring: Defining dependency "ring" 00:02:48.914 Message: lib/rcu: Defining dependency "rcu" 00:02:48.914 Message: lib/mempool: Defining dependency "mempool" 00:02:48.914 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.914 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.914 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:48.914 Compiler for C supports arguments -mpclmul: YES 00:02:48.914 Compiler for C supports arguments -maes: YES 00:02:48.914 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.914 Compiler for C supports arguments -mavx512bw: YES 00:02:48.914 Compiler for C supports arguments -mavx512dq: YES 00:02:48.914 Compiler for C supports arguments -mavx512vl: YES 00:02:48.914 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.914 Compiler for C supports arguments -mavx2: YES 00:02:48.914 Compiler for C supports arguments -mavx: YES 00:02:48.914 Message: lib/net: Defining dependency "net" 00:02:48.914 Message: lib/meter: Defining dependency "meter" 00:02:48.914 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.914 Message: lib/pci: Defining dependency "pci" 00:02:48.914 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.914 Message: lib/hash: Defining dependency "hash" 00:02:48.914 Message: lib/timer: Defining dependency "timer" 00:02:48.914 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.914 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.914 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.914 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.914 Message: lib/power: Defining 
dependency "power" 00:02:48.914 Message: lib/reorder: Defining dependency "reorder" 00:02:48.914 Message: lib/security: Defining dependency "security" 00:02:48.914 Has header "linux/userfaultfd.h" : YES 00:02:48.914 Has header "linux/vduse.h" : YES 00:02:48.914 Message: lib/vhost: Defining dependency "vhost" 00:02:48.914 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.914 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.914 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.914 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.914 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.914 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.914 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.914 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.914 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.914 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.914 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:48.914 Configuring doxy-api-html.conf using configuration 00:02:48.914 Configuring doxy-api-man.conf using configuration 00:02:48.914 Program mandb found: YES (/usr/bin/mandb) 00:02:48.914 Program sphinx-build found: NO 00:02:48.914 Configuring rte_build_config.h using configuration 00:02:48.914 Message: 00:02:48.914 ================= 00:02:48.914 Applications Enabled 00:02:48.914 ================= 00:02:48.914 00:02:48.914 apps: 00:02:48.914 00:02:48.914 00:02:48.914 Message: 00:02:48.914 ================= 00:02:48.914 Libraries Enabled 00:02:48.914 ================= 00:02:48.914 00:02:48.914 libs: 00:02:48.914 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.914 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.914 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.914 00:02:48.914 Message: 00:02:48.914 =============== 00:02:48.914 Drivers Enabled 00:02:48.914 =============== 00:02:48.914 00:02:48.914 common: 00:02:48.914 00:02:48.914 bus: 00:02:48.914 pci, vdev, 00:02:48.914 mempool: 00:02:48.914 ring, 00:02:48.914 dma: 00:02:48.914 00:02:48.914 net: 00:02:48.914 00:02:48.914 crypto: 00:02:48.914 00:02:48.914 compress: 00:02:48.914 00:02:48.914 vdpa: 00:02:48.914 00:02:48.914 00:02:48.914 Message: 00:02:48.914 ================= 00:02:48.914 Content Skipped 00:02:48.914 ================= 00:02:48.914 00:02:48.914 apps: 00:02:48.914 dumpcap: explicitly disabled via build config 00:02:48.914 graph: explicitly disabled via build config 00:02:48.914 pdump: explicitly disabled via build config 00:02:48.915 proc-info: explicitly disabled via build config 00:02:48.915 test-acl: explicitly disabled via build config 00:02:48.915 test-bbdev: explicitly disabled via build config 00:02:48.915 test-cmdline: explicitly disabled via build config 00:02:48.915 test-compress-perf: explicitly disabled via build config 00:02:48.915 test-crypto-perf: explicitly disabled via build config 00:02:48.915 test-dma-perf: explicitly disabled via build config 00:02:48.915 test-eventdev: explicitly disabled via build config 00:02:48.915 test-fib: explicitly disabled via build config 00:02:48.915 test-flow-perf: explicitly disabled via build config 00:02:48.915 test-gpudev: explicitly disabled via build config 00:02:48.915 test-mldev: explicitly disabled via build config 00:02:48.915 test-pipeline: 
explicitly disabled via build config 00:02:48.915 test-pmd: explicitly disabled via build config 00:02:48.915 test-regex: explicitly disabled via build config 00:02:48.915 test-sad: explicitly disabled via build config 00:02:48.915 test-security-perf: explicitly disabled via build config 00:02:48.915 00:02:48.915 libs: 00:02:48.915 argparse: explicitly disabled via build config 00:02:48.915 metrics: explicitly disabled via build config 00:02:48.915 acl: explicitly disabled via build config 00:02:48.915 bbdev: explicitly disabled via build config 00:02:48.915 bitratestats: explicitly disabled via build config 00:02:48.915 bpf: explicitly disabled via build config 00:02:48.915 cfgfile: explicitly disabled via build config 00:02:48.915 distributor: explicitly disabled via build config 00:02:48.915 efd: explicitly disabled via build config 00:02:48.915 eventdev: explicitly disabled via build config 00:02:48.915 dispatcher: explicitly disabled via build config 00:02:48.915 gpudev: explicitly disabled via build config 00:02:48.915 gro: explicitly disabled via build config 00:02:48.915 gso: explicitly disabled via build config 00:02:48.915 ip_frag: explicitly disabled via build config 00:02:48.915 jobstats: explicitly disabled via build config 00:02:48.915 latencystats: explicitly disabled via build config 00:02:48.915 lpm: explicitly disabled via build config 00:02:48.915 member: explicitly disabled via build config 00:02:48.915 pcapng: explicitly disabled via build config 00:02:48.915 rawdev: explicitly disabled via build config 00:02:48.915 regexdev: explicitly disabled via build config 00:02:48.915 mldev: explicitly disabled via build config 00:02:48.915 rib: explicitly disabled via build config 00:02:48.915 sched: explicitly disabled via build config 00:02:48.915 stack: explicitly disabled via build config 00:02:48.915 ipsec: explicitly disabled via build config 00:02:48.915 pdcp: explicitly disabled via build config 00:02:48.915 fib: explicitly disabled via build config 00:02:48.915 port: explicitly disabled via build config 00:02:48.915 pdump: explicitly disabled via build config 00:02:48.915 table: explicitly disabled via build config 00:02:48.915 pipeline: explicitly disabled via build config 00:02:48.915 graph: explicitly disabled via build config 00:02:48.915 node: explicitly disabled via build config 00:02:48.915 00:02:48.915 drivers: 00:02:48.915 common/cpt: not in enabled drivers build config 00:02:48.915 common/dpaax: not in enabled drivers build config 00:02:48.915 common/iavf: not in enabled drivers build config 00:02:48.915 common/idpf: not in enabled drivers build config 00:02:48.915 common/ionic: not in enabled drivers build config 00:02:48.915 common/mvep: not in enabled drivers build config 00:02:48.915 common/octeontx: not in enabled drivers build config 00:02:48.915 bus/auxiliary: not in enabled drivers build config 00:02:48.915 bus/cdx: not in enabled drivers build config 00:02:48.915 bus/dpaa: not in enabled drivers build config 00:02:48.915 bus/fslmc: not in enabled drivers build config 00:02:48.915 bus/ifpga: not in enabled drivers build config 00:02:48.915 bus/platform: not in enabled drivers build config 00:02:48.915 bus/uacce: not in enabled drivers build config 00:02:48.915 bus/vmbus: not in enabled drivers build config 00:02:48.915 common/cnxk: not in enabled drivers build config 00:02:48.915 common/mlx5: not in enabled drivers build config 00:02:48.915 common/nfp: not in enabled drivers build config 00:02:48.915 common/nitrox: not in enabled drivers build config 
00:02:48.915 common/qat: not in enabled drivers build config 00:02:48.915 common/sfc_efx: not in enabled drivers build config 00:02:48.915 mempool/bucket: not in enabled drivers build config 00:02:48.915 mempool/cnxk: not in enabled drivers build config 00:02:48.915 mempool/dpaa: not in enabled drivers build config 00:02:48.915 mempool/dpaa2: not in enabled drivers build config 00:02:48.915 mempool/octeontx: not in enabled drivers build config 00:02:48.915 mempool/stack: not in enabled drivers build config 00:02:48.915 dma/cnxk: not in enabled drivers build config 00:02:48.915 dma/dpaa: not in enabled drivers build config 00:02:48.915 dma/dpaa2: not in enabled drivers build config 00:02:48.915 dma/hisilicon: not in enabled drivers build config 00:02:48.915 dma/idxd: not in enabled drivers build config 00:02:48.915 dma/ioat: not in enabled drivers build config 00:02:48.915 dma/skeleton: not in enabled drivers build config 00:02:48.915 net/af_packet: not in enabled drivers build config 00:02:48.915 net/af_xdp: not in enabled drivers build config 00:02:48.915 net/ark: not in enabled drivers build config 00:02:48.915 net/atlantic: not in enabled drivers build config 00:02:48.915 net/avp: not in enabled drivers build config 00:02:48.915 net/axgbe: not in enabled drivers build config 00:02:48.915 net/bnx2x: not in enabled drivers build config 00:02:48.915 net/bnxt: not in enabled drivers build config 00:02:48.915 net/bonding: not in enabled drivers build config 00:02:48.915 net/cnxk: not in enabled drivers build config 00:02:48.915 net/cpfl: not in enabled drivers build config 00:02:48.915 net/cxgbe: not in enabled drivers build config 00:02:48.915 net/dpaa: not in enabled drivers build config 00:02:48.915 net/dpaa2: not in enabled drivers build config 00:02:48.915 net/e1000: not in enabled drivers build config 00:02:48.915 net/ena: not in enabled drivers build config 00:02:48.915 net/enetc: not in enabled drivers build config 00:02:48.915 net/enetfec: not in enabled drivers build config 00:02:48.915 net/enic: not in enabled drivers build config 00:02:48.915 net/failsafe: not in enabled drivers build config 00:02:48.915 net/fm10k: not in enabled drivers build config 00:02:48.915 net/gve: not in enabled drivers build config 00:02:48.915 net/hinic: not in enabled drivers build config 00:02:48.915 net/hns3: not in enabled drivers build config 00:02:48.915 net/i40e: not in enabled drivers build config 00:02:48.915 net/iavf: not in enabled drivers build config 00:02:48.915 net/ice: not in enabled drivers build config 00:02:48.915 net/idpf: not in enabled drivers build config 00:02:48.915 net/igc: not in enabled drivers build config 00:02:48.915 net/ionic: not in enabled drivers build config 00:02:48.915 net/ipn3ke: not in enabled drivers build config 00:02:48.915 net/ixgbe: not in enabled drivers build config 00:02:48.915 net/mana: not in enabled drivers build config 00:02:48.915 net/memif: not in enabled drivers build config 00:02:48.915 net/mlx4: not in enabled drivers build config 00:02:48.915 net/mlx5: not in enabled drivers build config 00:02:48.915 net/mvneta: not in enabled drivers build config 00:02:48.915 net/mvpp2: not in enabled drivers build config 00:02:48.915 net/netvsc: not in enabled drivers build config 00:02:48.915 net/nfb: not in enabled drivers build config 00:02:48.915 net/nfp: not in enabled drivers build config 00:02:48.915 net/ngbe: not in enabled drivers build config 00:02:48.915 net/null: not in enabled drivers build config 00:02:48.915 net/octeontx: not in enabled drivers 
build config 00:02:48.915 net/octeon_ep: not in enabled drivers build config 00:02:48.915 net/pcap: not in enabled drivers build config 00:02:48.915 net/pfe: not in enabled drivers build config 00:02:48.915 net/qede: not in enabled drivers build config 00:02:48.915 net/ring: not in enabled drivers build config 00:02:48.915 net/sfc: not in enabled drivers build config 00:02:48.915 net/softnic: not in enabled drivers build config 00:02:48.915 net/tap: not in enabled drivers build config 00:02:48.915 net/thunderx: not in enabled drivers build config 00:02:48.915 net/txgbe: not in enabled drivers build config 00:02:48.915 net/vdev_netvsc: not in enabled drivers build config 00:02:48.915 net/vhost: not in enabled drivers build config 00:02:48.915 net/virtio: not in enabled drivers build config 00:02:48.915 net/vmxnet3: not in enabled drivers build config 00:02:48.916 raw/*: missing internal dependency, "rawdev" 00:02:48.916 crypto/armv8: not in enabled drivers build config 00:02:48.916 crypto/bcmfs: not in enabled drivers build config 00:02:48.916 crypto/caam_jr: not in enabled drivers build config 00:02:48.916 crypto/ccp: not in enabled drivers build config 00:02:48.916 crypto/cnxk: not in enabled drivers build config 00:02:48.916 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.916 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.916 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.916 crypto/mlx5: not in enabled drivers build config 00:02:48.916 crypto/mvsam: not in enabled drivers build config 00:02:48.916 crypto/nitrox: not in enabled drivers build config 00:02:48.916 crypto/null: not in enabled drivers build config 00:02:48.916 crypto/octeontx: not in enabled drivers build config 00:02:48.916 crypto/openssl: not in enabled drivers build config 00:02:48.916 crypto/scheduler: not in enabled drivers build config 00:02:48.916 crypto/uadk: not in enabled drivers build config 00:02:48.916 crypto/virtio: not in enabled drivers build config 00:02:48.916 compress/isal: not in enabled drivers build config 00:02:48.916 compress/mlx5: not in enabled drivers build config 00:02:48.916 compress/nitrox: not in enabled drivers build config 00:02:48.916 compress/octeontx: not in enabled drivers build config 00:02:48.916 compress/zlib: not in enabled drivers build config 00:02:48.916 regex/*: missing internal dependency, "regexdev" 00:02:48.916 ml/*: missing internal dependency, "mldev" 00:02:48.916 vdpa/ifc: not in enabled drivers build config 00:02:48.916 vdpa/mlx5: not in enabled drivers build config 00:02:48.916 vdpa/nfp: not in enabled drivers build config 00:02:48.916 vdpa/sfc: not in enabled drivers build config 00:02:48.916 event/*: missing internal dependency, "eventdev" 00:02:48.916 baseband/*: missing internal dependency, "bbdev" 00:02:48.916 gpu/*: missing internal dependency, "gpudev" 00:02:48.916 00:02:48.916 00:02:48.916 Build targets in project: 85 00:02:48.916 00:02:48.916 DPDK 24.03.0 00:02:48.916 00:02:48.916 User defined options 00:02:48.916 buildtype : debug 00:02:48.916 default_library : shared 00:02:48.916 libdir : lib 00:02:48.916 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:48.916 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:48.916 c_link_args : 00:02:48.916 cpu_instruction_set: native 00:02:48.916 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:48.916 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:48.916 enable_docs : false 00:02:48.916 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:48.916 enable_kmods : false 00:02:48.916 max_lcores : 128 00:02:48.916 tests : false 00:02:48.916 00:02:48.916 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:49.481 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:49.481 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:49.481 [2/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:49.481 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:49.739 [4/268] Linking static target lib/librte_kvargs.a 00:02:49.739 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:49.739 [6/268] Linking static target lib/librte_log.a 00:02:50.305 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.305 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:50.305 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:50.305 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:50.305 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:50.305 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:50.305 [13/268] Linking static target lib/librte_telemetry.a 00:02:50.305 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:50.563 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:50.563 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:50.563 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:50.820 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.820 [19/268] Linking target lib/librte_log.so.24.1 00:02:50.820 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:51.077 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:51.077 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:51.335 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:51.335 [24/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.335 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:51.592 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:51.592 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:51.592 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:51.592 [29/268] Linking target lib/librte_telemetry.so.24.1 00:02:51.592 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.592 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:51.592 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:51.592 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:51.592 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.925 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:51.925 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:51.925 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:52.199 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.456 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.456 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:52.456 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:52.456 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:52.456 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:52.456 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.456 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.715 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:52.715 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:52.715 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:52.973 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:52.973 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:52.973 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.231 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:53.489 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:53.489 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.489 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:53.747 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:53.747 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:53.747 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:53.747 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:53.747 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:53.747 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.004 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.004 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.261 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.519 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.519 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.519 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.776 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.776 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:54.776 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.776 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.776 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.777 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:54.777 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.034 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.034 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.293 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.293 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.550 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.550 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:55.550 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.550 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.550 [83/268] Linking static target lib/librte_ring.a 00:02:55.808 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.808 [85/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.808 [86/268] Linking static target lib/librte_rcu.a 00:02:55.808 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.808 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:56.067 [89/268] Linking static target lib/librte_eal.a 00:02:56.067 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.067 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.324 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.324 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.324 [94/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.324 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.324 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.582 [97/268] Linking static target lib/librte_mempool.a 00:02:56.582 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.582 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:56.582 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.582 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:56.582 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.582 [103/268] Linking static target lib/librte_mbuf.a 00:02:57.179 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:57.179 [105/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:57.179 [106/268] Linking static target lib/librte_meter.a 00:02:57.179 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:57.179 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.179 [109/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.179 [110/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:57.437 [111/268] Linking static target lib/librte_net.a 00:02:57.437 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.437 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.695 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.695 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.695 [116/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.952 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.952 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.952 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:58.518 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:58.518 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.775 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.775 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.775 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.775 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.775 [126/268] Linking static target lib/librte_pci.a 00:02:58.775 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:59.033 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:59.033 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:59.033 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:59.033 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:59.290 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.290 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:59.290 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.290 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:59.290 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:59.290 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:59.290 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:59.548 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:59.548 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:59.548 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:59.548 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:59.548 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:59.548 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:59.548 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:59.548 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:59.548 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.804 [148/268] Linking static target lib/librte_ethdev.a 00:02:59.804 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:59.804 [150/268] Linking static target lib/librte_cmdline.a 00:03:00.062 [151/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:00.319 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:00.319 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:00.319 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:00.319 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:00.578 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:00.578 [157/268] Linking static target lib/librte_timer.a 00:03:00.836 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:00.836 [159/268] Linking static target lib/librte_hash.a 00:03:00.836 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:00.836 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:00.836 [162/268] Linking static target lib/librte_compressdev.a 00:03:01.094 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:01.094 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:01.094 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:01.356 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.356 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:01.356 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:01.614 [169/268] Linking static target lib/librte_dmadev.a 00:03:01.614 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.614 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:01.614 [172/268] Linking static target lib/librte_cryptodev.a 00:03:01.614 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:01.872 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:01.872 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:01.872 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.130 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.130 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.388 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.388 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.388 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.388 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.388 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.646 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.905 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.905 [186/268] Linking static target lib/librte_power.a 00:03:03.163 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.163 [188/268] Linking static target lib/librte_security.a 00:03:03.163 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.163 [190/268] Linking static target lib/librte_reorder.a 00:03:03.163 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.421 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.421 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.679 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.936 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.936 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.195 [197/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.195 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.195 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.453 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.453 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.711 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.711 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.711 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.969 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.307 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:05.307 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.307 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.307 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.307 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.307 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.307 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.579 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.579 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.579 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.579 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.579 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.579 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:05.579 [219/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.579 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.579 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.579 [222/268] Linking static target drivers/librte_bus_pci.a 00:03:05.579 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.837 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.837 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.837 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:05.837 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.095 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:07.024 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:07.024 [230/268] Linking static target lib/librte_vhost.a 00:03:07.590 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.847 [232/268] Linking target lib/librte_eal.so.24.1 00:03:07.847 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:07.847 [234/268] Linking target lib/librte_timer.so.24.1 00:03:07.847 [235/268] Linking target lib/librte_meter.so.24.1 00:03:07.847 [236/268] Linking target lib/librte_ring.so.24.1 00:03:07.847 [237/268] Linking target lib/librte_pci.so.24.1 00:03:07.847 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:08.104 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:08.104 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:08.104 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:08.104 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:08.104 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:08.104 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:08.104 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:08.104 [246/268] Linking target lib/librte_mempool.so.24.1 00:03:08.104 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:08.104 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.361 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:08.361 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:08.361 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:08.361 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:08.619 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.619 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:08.619 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:08.619 [256/268] Linking target lib/librte_reorder.so.24.1 00:03:08.619 [257/268] Linking target lib/librte_net.so.24.1 00:03:08.619 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:08.877 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:08.877 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.877 [261/268] Linking target lib/librte_security.so.24.1 00:03:08.877 [262/268] Linking target lib/librte_cmdline.so.24.1 00:03:08.877 [263/268] Linking target lib/librte_hash.so.24.1 00:03:08.877 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:08.877 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:09.136 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:09.136 [267/268] Linking target lib/librte_power.so.24.1 00:03:09.136 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:09.136 INFO: autodetecting backend as ninja 00:03:09.136 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:35.683 CC lib/ut/ut.o 00:03:35.683 CC lib/log/log_deprecated.o 00:03:35.683 CC lib/log/log.o 00:03:35.683 CC 
lib/log/log_flags.o 00:03:35.683 CC lib/ut_mock/mock.o 00:03:35.683 LIB libspdk_ut.a 00:03:35.683 SO libspdk_ut.so.2.0 00:03:35.683 LIB libspdk_log.a 00:03:35.683 SYMLINK libspdk_ut.so 00:03:35.683 SO libspdk_log.so.7.1 00:03:35.683 LIB libspdk_ut_mock.a 00:03:35.683 SO libspdk_ut_mock.so.6.0 00:03:35.683 SYMLINK libspdk_log.so 00:03:35.941 SYMLINK libspdk_ut_mock.so 00:03:35.941 CC lib/ioat/ioat.o 00:03:35.941 CC lib/dma/dma.o 00:03:35.941 CXX lib/trace_parser/trace.o 00:03:35.941 CC lib/util/base64.o 00:03:35.941 CC lib/util/bit_array.o 00:03:35.941 CC lib/util/cpuset.o 00:03:35.941 CC lib/util/crc16.o 00:03:35.941 CC lib/util/crc32.o 00:03:35.941 CC lib/util/crc32c.o 00:03:36.198 CC lib/vfio_user/host/vfio_user_pci.o 00:03:36.198 CC lib/util/crc32_ieee.o 00:03:36.198 CC lib/vfio_user/host/vfio_user.o 00:03:36.198 CC lib/util/crc64.o 00:03:36.198 LIB libspdk_dma.a 00:03:36.198 CC lib/util/dif.o 00:03:36.198 SO libspdk_dma.so.5.0 00:03:36.198 SYMLINK libspdk_dma.so 00:03:36.198 CC lib/util/fd.o 00:03:36.198 CC lib/util/fd_group.o 00:03:36.198 CC lib/util/file.o 00:03:36.456 CC lib/util/hexlify.o 00:03:36.456 LIB libspdk_ioat.a 00:03:36.456 SO libspdk_ioat.so.7.0 00:03:36.456 CC lib/util/iov.o 00:03:36.456 CC lib/util/math.o 00:03:36.456 LIB libspdk_vfio_user.a 00:03:36.456 SYMLINK libspdk_ioat.so 00:03:36.456 CC lib/util/net.o 00:03:36.456 SO libspdk_vfio_user.so.5.0 00:03:36.456 CC lib/util/pipe.o 00:03:36.456 CC lib/util/strerror_tls.o 00:03:36.456 CC lib/util/string.o 00:03:36.456 SYMLINK libspdk_vfio_user.so 00:03:36.456 CC lib/util/uuid.o 00:03:36.456 CC lib/util/xor.o 00:03:36.714 CC lib/util/zipf.o 00:03:36.714 CC lib/util/md5.o 00:03:36.973 LIB libspdk_util.a 00:03:37.230 SO libspdk_util.so.10.1 00:03:37.230 LIB libspdk_trace_parser.a 00:03:37.230 SO libspdk_trace_parser.so.6.0 00:03:37.230 SYMLINK libspdk_util.so 00:03:37.230 SYMLINK libspdk_trace_parser.so 00:03:37.488 CC lib/json/json_parse.o 00:03:37.488 CC lib/json/json_util.o 00:03:37.488 CC lib/json/json_write.o 00:03:37.488 CC lib/conf/conf.o 00:03:37.488 CC lib/idxd/idxd.o 00:03:37.488 CC lib/env_dpdk/env.o 00:03:37.488 CC lib/env_dpdk/memory.o 00:03:37.488 CC lib/idxd/idxd_user.o 00:03:37.488 CC lib/rdma_utils/rdma_utils.o 00:03:37.488 CC lib/vmd/vmd.o 00:03:37.745 CC lib/env_dpdk/pci.o 00:03:37.745 CC lib/env_dpdk/init.o 00:03:37.745 CC lib/env_dpdk/threads.o 00:03:38.003 LIB libspdk_json.a 00:03:38.003 LIB libspdk_conf.a 00:03:38.003 SO libspdk_conf.so.6.0 00:03:38.003 SO libspdk_json.so.6.0 00:03:38.003 LIB libspdk_rdma_utils.a 00:03:38.003 SYMLINK libspdk_conf.so 00:03:38.003 SYMLINK libspdk_json.so 00:03:38.003 CC lib/vmd/led.o 00:03:38.003 CC lib/env_dpdk/pci_ioat.o 00:03:38.003 CC lib/env_dpdk/pci_virtio.o 00:03:38.003 SO libspdk_rdma_utils.so.1.0 00:03:38.003 SYMLINK libspdk_rdma_utils.so 00:03:38.003 CC lib/idxd/idxd_kernel.o 00:03:38.003 CC lib/env_dpdk/pci_vmd.o 00:03:38.260 CC lib/env_dpdk/pci_idxd.o 00:03:38.260 CC lib/env_dpdk/pci_event.o 00:03:38.260 LIB libspdk_vmd.a 00:03:38.260 CC lib/env_dpdk/sigbus_handler.o 00:03:38.260 SO libspdk_vmd.so.6.0 00:03:38.260 CC lib/env_dpdk/pci_dpdk.o 00:03:38.260 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:38.260 LIB libspdk_idxd.a 00:03:38.260 CC lib/rdma_provider/common.o 00:03:38.260 SYMLINK libspdk_vmd.so 00:03:38.260 CC lib/jsonrpc/jsonrpc_server.o 00:03:38.260 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:38.260 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:38.260 SO libspdk_idxd.so.12.1 00:03:38.260 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:38.518 SYMLINK libspdk_idxd.so 
00:03:38.519 CC lib/jsonrpc/jsonrpc_client.o 00:03:38.519 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:38.519 LIB libspdk_rdma_provider.a 00:03:38.777 LIB libspdk_jsonrpc.a 00:03:38.777 SO libspdk_rdma_provider.so.7.0 00:03:38.777 SO libspdk_jsonrpc.so.6.0 00:03:38.777 SYMLINK libspdk_rdma_provider.so 00:03:38.777 SYMLINK libspdk_jsonrpc.so 00:03:39.035 LIB libspdk_env_dpdk.a 00:03:39.035 CC lib/rpc/rpc.o 00:03:39.035 SO libspdk_env_dpdk.so.15.1 00:03:39.293 SYMLINK libspdk_env_dpdk.so 00:03:39.293 LIB libspdk_rpc.a 00:03:39.293 SO libspdk_rpc.so.6.0 00:03:39.293 SYMLINK libspdk_rpc.so 00:03:39.552 CC lib/keyring/keyring.o 00:03:39.552 CC lib/keyring/keyring_rpc.o 00:03:39.552 CC lib/trace/trace.o 00:03:39.552 CC lib/notify/notify.o 00:03:39.552 CC lib/trace/trace_flags.o 00:03:39.552 CC lib/notify/notify_rpc.o 00:03:39.552 CC lib/trace/trace_rpc.o 00:03:39.810 LIB libspdk_notify.a 00:03:39.810 SO libspdk_notify.so.6.0 00:03:39.810 LIB libspdk_trace.a 00:03:39.810 LIB libspdk_keyring.a 00:03:39.810 SO libspdk_trace.so.11.0 00:03:40.068 SYMLINK libspdk_notify.so 00:03:40.068 SO libspdk_keyring.so.2.0 00:03:40.068 SYMLINK libspdk_trace.so 00:03:40.068 SYMLINK libspdk_keyring.so 00:03:40.325 CC lib/sock/sock.o 00:03:40.325 CC lib/sock/sock_rpc.o 00:03:40.325 CC lib/thread/iobuf.o 00:03:40.325 CC lib/thread/thread.o 00:03:40.970 LIB libspdk_sock.a 00:03:40.970 SO libspdk_sock.so.10.0 00:03:40.970 SYMLINK libspdk_sock.so 00:03:41.231 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:41.231 CC lib/nvme/nvme_ctrlr.o 00:03:41.231 CC lib/nvme/nvme_ns.o 00:03:41.231 CC lib/nvme/nvme_fabric.o 00:03:41.231 CC lib/nvme/nvme_ns_cmd.o 00:03:41.231 CC lib/nvme/nvme_qpair.o 00:03:41.231 CC lib/nvme/nvme_pcie_common.o 00:03:41.231 CC lib/nvme/nvme_pcie.o 00:03:41.231 CC lib/nvme/nvme.o 00:03:42.165 LIB libspdk_thread.a 00:03:42.165 CC lib/nvme/nvme_quirks.o 00:03:42.165 SO libspdk_thread.so.11.0 00:03:42.165 CC lib/nvme/nvme_transport.o 00:03:42.165 CC lib/nvme/nvme_discovery.o 00:03:42.165 SYMLINK libspdk_thread.so 00:03:42.165 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:42.165 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:42.165 CC lib/nvme/nvme_tcp.o 00:03:42.165 CC lib/nvme/nvme_opal.o 00:03:42.422 CC lib/nvme/nvme_io_msg.o 00:03:42.422 CC lib/nvme/nvme_poll_group.o 00:03:42.679 CC lib/nvme/nvme_zns.o 00:03:42.679 CC lib/nvme/nvme_stubs.o 00:03:42.679 CC lib/nvme/nvme_auth.o 00:03:42.679 CC lib/nvme/nvme_cuse.o 00:03:42.679 CC lib/nvme/nvme_rdma.o 00:03:42.937 CC lib/accel/accel.o 00:03:43.195 CC lib/blob/blobstore.o 00:03:43.453 CC lib/init/json_config.o 00:03:43.453 CC lib/virtio/virtio.o 00:03:43.453 CC lib/fsdev/fsdev.o 00:03:43.453 CC lib/virtio/virtio_vhost_user.o 00:03:43.710 CC lib/init/subsystem.o 00:03:43.710 CC lib/virtio/virtio_vfio_user.o 00:03:43.710 CC lib/virtio/virtio_pci.o 00:03:43.711 CC lib/init/subsystem_rpc.o 00:03:43.711 CC lib/init/rpc.o 00:03:43.711 CC lib/blob/request.o 00:03:43.969 CC lib/accel/accel_rpc.o 00:03:43.969 CC lib/accel/accel_sw.o 00:03:43.969 CC lib/fsdev/fsdev_io.o 00:03:43.969 LIB libspdk_init.a 00:03:43.969 LIB libspdk_virtio.a 00:03:44.227 SO libspdk_init.so.6.0 00:03:44.227 SO libspdk_virtio.so.7.0 00:03:44.227 CC lib/fsdev/fsdev_rpc.o 00:03:44.227 CC lib/blob/zeroes.o 00:03:44.227 CC lib/blob/blob_bs_dev.o 00:03:44.227 SYMLINK libspdk_init.so 00:03:44.227 SYMLINK libspdk_virtio.so 00:03:44.227 LIB libspdk_nvme.a 00:03:44.227 LIB libspdk_accel.a 00:03:44.227 SO libspdk_accel.so.16.0 00:03:44.485 LIB libspdk_fsdev.a 00:03:44.485 CC lib/event/app.o 00:03:44.485 CC lib/event/reactor.o 
00:03:44.485 CC lib/event/app_rpc.o 00:03:44.485 CC lib/event/log_rpc.o 00:03:44.485 CC lib/event/scheduler_static.o 00:03:44.485 SYMLINK libspdk_accel.so 00:03:44.485 SO libspdk_nvme.so.15.0 00:03:44.485 SO libspdk_fsdev.so.2.0 00:03:44.485 SYMLINK libspdk_fsdev.so 00:03:44.485 CC lib/bdev/bdev.o 00:03:44.485 CC lib/bdev/bdev_rpc.o 00:03:44.485 CC lib/bdev/bdev_zone.o 00:03:44.485 CC lib/bdev/part.o 00:03:44.744 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:44.744 CC lib/bdev/scsi_nvme.o 00:03:44.744 SYMLINK libspdk_nvme.so 00:03:45.002 LIB libspdk_event.a 00:03:45.002 SO libspdk_event.so.14.0 00:03:45.002 SYMLINK libspdk_event.so 00:03:45.261 LIB libspdk_fuse_dispatcher.a 00:03:45.519 SO libspdk_fuse_dispatcher.so.1.0 00:03:45.519 SYMLINK libspdk_fuse_dispatcher.so 00:03:46.107 LIB libspdk_blob.a 00:03:46.365 SO libspdk_blob.so.11.0 00:03:46.365 SYMLINK libspdk_blob.so 00:03:46.624 CC lib/lvol/lvol.o 00:03:46.624 CC lib/blobfs/tree.o 00:03:46.624 CC lib/blobfs/blobfs.o 00:03:47.563 LIB libspdk_bdev.a 00:03:47.563 LIB libspdk_blobfs.a 00:03:47.563 SO libspdk_bdev.so.17.0 00:03:47.563 SO libspdk_blobfs.so.10.0 00:03:47.563 LIB libspdk_lvol.a 00:03:47.563 SO libspdk_lvol.so.10.0 00:03:47.563 SYMLINK libspdk_blobfs.so 00:03:47.821 SYMLINK libspdk_bdev.so 00:03:47.821 SYMLINK libspdk_lvol.so 00:03:47.821 CC lib/nvmf/ctrlr.o 00:03:47.821 CC lib/ftl/ftl_core.o 00:03:47.821 CC lib/nvmf/ctrlr_discovery.o 00:03:47.821 CC lib/nbd/nbd.o 00:03:47.821 CC lib/nvmf/ctrlr_bdev.o 00:03:47.821 CC lib/ftl/ftl_init.o 00:03:47.821 CC lib/scsi/dev.o 00:03:47.821 CC lib/nbd/nbd_rpc.o 00:03:47.821 CC lib/ftl/ftl_layout.o 00:03:47.821 CC lib/ublk/ublk.o 00:03:48.079 CC lib/ftl/ftl_debug.o 00:03:48.337 CC lib/ftl/ftl_io.o 00:03:48.337 CC lib/scsi/lun.o 00:03:48.337 CC lib/ftl/ftl_sb.o 00:03:48.337 CC lib/ftl/ftl_l2p.o 00:03:48.337 CC lib/scsi/port.o 00:03:48.596 LIB libspdk_nbd.a 00:03:48.596 CC lib/nvmf/subsystem.o 00:03:48.596 CC lib/nvmf/nvmf.o 00:03:48.596 SO libspdk_nbd.so.7.0 00:03:48.596 SYMLINK libspdk_nbd.so 00:03:48.596 CC lib/nvmf/nvmf_rpc.o 00:03:48.596 CC lib/ublk/ublk_rpc.o 00:03:48.596 CC lib/nvmf/transport.o 00:03:48.596 CC lib/scsi/scsi.o 00:03:48.596 CC lib/ftl/ftl_l2p_flat.o 00:03:48.596 CC lib/ftl/ftl_nv_cache.o 00:03:48.596 CC lib/ftl/ftl_band.o 00:03:48.854 LIB libspdk_ublk.a 00:03:48.854 SO libspdk_ublk.so.3.0 00:03:48.854 CC lib/scsi/scsi_bdev.o 00:03:48.854 CC lib/ftl/ftl_band_ops.o 00:03:48.854 SYMLINK libspdk_ublk.so 00:03:48.854 CC lib/scsi/scsi_pr.o 00:03:49.112 CC lib/ftl/ftl_writer.o 00:03:49.370 CC lib/scsi/scsi_rpc.o 00:03:49.370 CC lib/ftl/ftl_rq.o 00:03:49.370 CC lib/ftl/ftl_reloc.o 00:03:49.370 CC lib/scsi/task.o 00:03:49.370 CC lib/nvmf/tcp.o 00:03:49.630 CC lib/nvmf/stubs.o 00:03:49.630 CC lib/nvmf/mdns_server.o 00:03:49.630 CC lib/nvmf/rdma.o 00:03:49.630 CC lib/nvmf/auth.o 00:03:49.630 CC lib/ftl/ftl_l2p_cache.o 00:03:49.630 LIB libspdk_scsi.a 00:03:49.888 CC lib/ftl/ftl_p2l.o 00:03:49.888 CC lib/ftl/ftl_p2l_log.o 00:03:49.888 SO libspdk_scsi.so.9.0 00:03:49.888 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.888 SYMLINK libspdk_scsi.so 00:03:49.888 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.888 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.147 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.404 CC lib/iscsi/conn.o 00:03:50.404 CC lib/iscsi/init_grp.o 00:03:50.404 CC 
lib/iscsi/iscsi.o 00:03:50.404 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.404 CC lib/iscsi/param.o 00:03:50.404 CC lib/iscsi/portal_grp.o 00:03:50.662 CC lib/vhost/vhost.o 00:03:50.662 CC lib/iscsi/tgt_node.o 00:03:50.662 CC lib/iscsi/iscsi_subsystem.o 00:03:50.662 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.662 CC lib/iscsi/iscsi_rpc.o 00:03:50.920 CC lib/iscsi/task.o 00:03:50.920 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.920 CC lib/vhost/vhost_rpc.o 00:03:50.920 CC lib/vhost/vhost_scsi.o 00:03:50.920 CC lib/vhost/vhost_blk.o 00:03:51.178 CC lib/vhost/rte_vhost_user.o 00:03:51.178 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.178 CC lib/ftl/utils/ftl_conf.o 00:03:51.435 CC lib/ftl/utils/ftl_md.o 00:03:51.435 CC lib/ftl/utils/ftl_mempool.o 00:03:51.435 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.435 CC lib/ftl/utils/ftl_property.o 00:03:51.693 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.693 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.693 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.693 LIB libspdk_nvmf.a 00:03:51.693 LIB libspdk_iscsi.a 00:03:51.693 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.950 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.950 SO libspdk_iscsi.so.8.0 00:03:51.950 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.950 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.950 SO libspdk_nvmf.so.20.0 00:03:51.950 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.950 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.950 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.950 SYMLINK libspdk_iscsi.so 00:03:51.950 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:52.208 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:52.208 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:52.208 SYMLINK libspdk_nvmf.so 00:03:52.208 CC lib/ftl/base/ftl_base_dev.o 00:03:52.208 CC lib/ftl/base/ftl_base_bdev.o 00:03:52.208 CC lib/ftl/ftl_trace.o 00:03:52.208 LIB libspdk_vhost.a 00:03:52.465 SO libspdk_vhost.so.8.0 00:03:52.465 LIB libspdk_ftl.a 00:03:52.465 SYMLINK libspdk_vhost.so 00:03:52.723 SO libspdk_ftl.so.9.0 00:03:52.980 SYMLINK libspdk_ftl.so 00:03:53.545 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.545 CC module/fsdev/aio/fsdev_aio.o 00:03:53.545 CC module/blob/bdev/blob_bdev.o 00:03:53.545 CC module/accel/error/accel_error.o 00:03:53.545 CC module/keyring/linux/keyring.o 00:03:53.545 CC module/keyring/file/keyring.o 00:03:53.545 CC module/sock/uring/uring.o 00:03:53.545 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.545 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.545 CC module/sock/posix/posix.o 00:03:53.545 LIB libspdk_env_dpdk_rpc.a 00:03:53.545 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.803 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.803 CC module/keyring/file/keyring_rpc.o 00:03:53.803 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:53.803 CC module/keyring/linux/keyring_rpc.o 00:03:53.803 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.803 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.803 LIB libspdk_scheduler_dynamic.a 00:03:53.803 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.803 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.803 CC module/fsdev/aio/linux_aio_mgr.o 00:03:53.803 CC module/accel/error/accel_error_rpc.o 00:03:53.803 LIB libspdk_blob_bdev.a 00:03:53.803 LIB libspdk_keyring_file.a 00:03:53.803 LIB libspdk_keyring_linux.a 00:03:53.803 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.803 SO libspdk_blob_bdev.so.11.0 00:03:53.803 SO libspdk_keyring_linux.so.1.0 00:03:53.803 SO libspdk_keyring_file.so.2.0 00:03:53.803 SYMLINK libspdk_keyring_linux.so 00:03:54.113 SYMLINK libspdk_keyring_file.so 00:03:54.113 
SYMLINK libspdk_blob_bdev.so 00:03:54.113 LIB libspdk_accel_error.a 00:03:54.113 SO libspdk_accel_error.so.2.0 00:03:54.113 SYMLINK libspdk_accel_error.so 00:03:54.113 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.113 CC module/accel/dsa/accel_dsa.o 00:03:54.113 CC module/accel/iaa/accel_iaa.o 00:03:54.113 CC module/accel/ioat/accel_ioat.o 00:03:54.113 LIB libspdk_fsdev_aio.a 00:03:54.371 SO libspdk_fsdev_aio.so.1.0 00:03:54.371 LIB libspdk_sock_uring.a 00:03:54.371 CC module/bdev/delay/vbdev_delay.o 00:03:54.371 CC module/bdev/error/vbdev_error.o 00:03:54.371 SO libspdk_sock_uring.so.5.0 00:03:54.371 LIB libspdk_sock_posix.a 00:03:54.371 SYMLINK libspdk_fsdev_aio.so 00:03:54.371 LIB libspdk_scheduler_gscheduler.a 00:03:54.371 SO libspdk_sock_posix.so.6.0 00:03:54.371 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.371 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.371 SYMLINK libspdk_sock_uring.so 00:03:54.371 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.371 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.371 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.371 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.371 SYMLINK libspdk_sock_posix.so 00:03:54.371 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.371 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.630 LIB libspdk_accel_iaa.a 00:03:54.630 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.630 SO libspdk_accel_iaa.so.3.0 00:03:54.630 LIB libspdk_accel_ioat.a 00:03:54.630 LIB libspdk_accel_dsa.a 00:03:54.630 SO libspdk_accel_ioat.so.6.0 00:03:54.630 SO libspdk_accel_dsa.so.5.0 00:03:54.630 SYMLINK libspdk_accel_iaa.so 00:03:54.630 LIB libspdk_bdev_error.a 00:03:54.630 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.630 CC module/bdev/gpt/gpt.o 00:03:54.630 LIB libspdk_blobfs_bdev.a 00:03:54.630 SO libspdk_bdev_error.so.6.0 00:03:54.630 SYMLINK libspdk_accel_ioat.so 00:03:54.630 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.630 SYMLINK libspdk_accel_dsa.so 00:03:54.630 SO libspdk_blobfs_bdev.so.6.0 00:03:54.630 CC module/bdev/malloc/bdev_malloc.o 00:03:54.630 LIB libspdk_bdev_delay.a 00:03:54.630 SYMLINK libspdk_bdev_error.so 00:03:54.888 SYMLINK libspdk_blobfs_bdev.so 00:03:54.888 SO libspdk_bdev_delay.so.6.0 00:03:54.888 CC module/bdev/null/bdev_null.o 00:03:54.888 SYMLINK libspdk_bdev_delay.so 00:03:54.888 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.888 CC module/bdev/nvme/bdev_nvme.o 00:03:54.888 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.888 CC module/bdev/raid/bdev_raid.o 00:03:54.888 CC module/bdev/split/vbdev_split.o 00:03:55.146 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.146 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.146 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:55.146 CC module/bdev/null/bdev_null_rpc.o 00:03:55.146 LIB libspdk_bdev_gpt.a 00:03:55.146 LIB libspdk_bdev_lvol.a 00:03:55.146 SO libspdk_bdev_gpt.so.6.0 00:03:55.146 SO libspdk_bdev_lvol.so.6.0 00:03:55.146 CC module/bdev/split/vbdev_split_rpc.o 00:03:55.146 SYMLINK libspdk_bdev_gpt.so 00:03:55.146 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.146 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.146 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.146 SYMLINK libspdk_bdev_lvol.so 00:03:55.146 CC module/bdev/nvme/nvme_rpc.o 00:03:55.405 LIB libspdk_bdev_malloc.a 00:03:55.405 LIB libspdk_bdev_null.a 00:03:55.405 SO libspdk_bdev_malloc.so.6.0 00:03:55.405 SO libspdk_bdev_null.so.6.0 00:03:55.405 LIB libspdk_bdev_zone_block.a 00:03:55.405 SYMLINK libspdk_bdev_malloc.so 00:03:55.405 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.405 LIB 
libspdk_bdev_split.a 00:03:55.405 SO libspdk_bdev_zone_block.so.6.0 00:03:55.405 SYMLINK libspdk_bdev_null.so 00:03:55.405 CC module/bdev/nvme/vbdev_opal.o 00:03:55.405 SO libspdk_bdev_split.so.6.0 00:03:55.405 LIB libspdk_bdev_passthru.a 00:03:55.405 SYMLINK libspdk_bdev_zone_block.so 00:03:55.405 SO libspdk_bdev_passthru.so.6.0 00:03:55.405 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:55.405 SYMLINK libspdk_bdev_split.so 00:03:55.405 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:55.405 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.405 SYMLINK libspdk_bdev_passthru.so 00:03:55.405 CC module/bdev/raid/raid0.o 00:03:55.663 CC module/bdev/raid/raid1.o 00:03:55.663 CC module/bdev/uring/bdev_uring.o 00:03:55.663 CC module/bdev/uring/bdev_uring_rpc.o 00:03:55.663 CC module/bdev/raid/concat.o 00:03:55.922 CC module/bdev/aio/bdev_aio.o 00:03:55.922 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.922 LIB libspdk_bdev_raid.a 00:03:55.922 CC module/bdev/ftl/bdev_ftl.o 00:03:55.922 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.922 SO libspdk_bdev_raid.so.6.0 00:03:56.180 LIB libspdk_bdev_uring.a 00:03:56.180 CC module/bdev/iscsi/bdev_iscsi.o 00:03:56.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:56.180 SO libspdk_bdev_uring.so.6.0 00:03:56.180 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:56.180 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:56.180 SYMLINK libspdk_bdev_raid.so 00:03:56.180 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:56.180 SYMLINK libspdk_bdev_uring.so 00:03:56.180 LIB libspdk_bdev_aio.a 00:03:56.180 SO libspdk_bdev_aio.so.6.0 00:03:56.180 SYMLINK libspdk_bdev_aio.so 00:03:56.439 LIB libspdk_bdev_ftl.a 00:03:56.439 SO libspdk_bdev_ftl.so.6.0 00:03:56.439 SYMLINK libspdk_bdev_ftl.so 00:03:56.439 LIB libspdk_bdev_iscsi.a 00:03:56.439 SO libspdk_bdev_iscsi.so.6.0 00:03:56.697 SYMLINK libspdk_bdev_iscsi.so 00:03:56.697 LIB libspdk_bdev_virtio.a 00:03:56.697 SO libspdk_bdev_virtio.so.6.0 00:03:56.697 SYMLINK libspdk_bdev_virtio.so 00:03:57.632 LIB libspdk_bdev_nvme.a 00:03:57.632 SO libspdk_bdev_nvme.so.7.1 00:03:57.890 SYMLINK libspdk_bdev_nvme.so 00:03:58.147 CC module/event/subsystems/fsdev/fsdev.o 00:03:58.147 CC module/event/subsystems/sock/sock.o 00:03:58.147 CC module/event/subsystems/scheduler/scheduler.o 00:03:58.147 CC module/event/subsystems/keyring/keyring.o 00:03:58.147 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:58.147 CC module/event/subsystems/vmd/vmd.o 00:03:58.147 CC module/event/subsystems/iobuf/iobuf.o 00:03:58.147 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:58.147 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:58.405 LIB libspdk_event_fsdev.a 00:03:58.405 LIB libspdk_event_scheduler.a 00:03:58.405 LIB libspdk_event_keyring.a 00:03:58.405 LIB libspdk_event_sock.a 00:03:58.405 SO libspdk_event_scheduler.so.4.0 00:03:58.405 SO libspdk_event_keyring.so.1.0 00:03:58.405 SO libspdk_event_fsdev.so.1.0 00:03:58.405 SO libspdk_event_sock.so.5.0 00:03:58.405 LIB libspdk_event_iobuf.a 00:03:58.405 LIB libspdk_event_vhost_blk.a 00:03:58.405 LIB libspdk_event_vmd.a 00:03:58.405 SYMLINK libspdk_event_keyring.so 00:03:58.405 SYMLINK libspdk_event_scheduler.so 00:03:58.405 SYMLINK libspdk_event_fsdev.so 00:03:58.405 SO libspdk_event_vhost_blk.so.3.0 00:03:58.405 SO libspdk_event_vmd.so.6.0 00:03:58.405 SO libspdk_event_iobuf.so.3.0 00:03:58.405 SYMLINK libspdk_event_sock.so 00:03:58.663 SYMLINK libspdk_event_vhost_blk.so 00:03:58.663 SYMLINK libspdk_event_vmd.so 00:03:58.663 SYMLINK libspdk_event_iobuf.so 00:03:58.920 CC module/event/subsystems/accel/accel.o 00:03:58.920 
LIB libspdk_event_accel.a 00:03:58.920 SO libspdk_event_accel.so.6.0 00:03:59.178 SYMLINK libspdk_event_accel.so 00:03:59.435 CC module/event/subsystems/bdev/bdev.o 00:03:59.692 LIB libspdk_event_bdev.a 00:03:59.693 SO libspdk_event_bdev.so.6.0 00:03:59.693 SYMLINK libspdk_event_bdev.so 00:03:59.950 CC module/event/subsystems/scsi/scsi.o 00:03:59.950 CC module/event/subsystems/nbd/nbd.o 00:03:59.950 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.950 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:59.950 CC module/event/subsystems/ublk/ublk.o 00:04:00.209 LIB libspdk_event_nbd.a 00:04:00.209 LIB libspdk_event_ublk.a 00:04:00.209 LIB libspdk_event_scsi.a 00:04:00.209 SO libspdk_event_nbd.so.6.0 00:04:00.209 SO libspdk_event_ublk.so.3.0 00:04:00.209 SO libspdk_event_scsi.so.6.0 00:04:00.209 SYMLINK libspdk_event_nbd.so 00:04:00.209 SYMLINK libspdk_event_ublk.so 00:04:00.209 SYMLINK libspdk_event_scsi.so 00:04:00.209 LIB libspdk_event_nvmf.a 00:04:00.209 SO libspdk_event_nvmf.so.6.0 00:04:00.466 SYMLINK libspdk_event_nvmf.so 00:04:00.466 CC module/event/subsystems/iscsi/iscsi.o 00:04:00.466 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:00.724 LIB libspdk_event_vhost_scsi.a 00:04:00.724 SO libspdk_event_vhost_scsi.so.3.0 00:04:00.724 LIB libspdk_event_iscsi.a 00:04:00.724 SYMLINK libspdk_event_vhost_scsi.so 00:04:00.724 SO libspdk_event_iscsi.so.6.0 00:04:00.980 SYMLINK libspdk_event_iscsi.so 00:04:00.980 SO libspdk.so.6.0 00:04:00.980 SYMLINK libspdk.so 00:04:01.237 CXX app/trace/trace.o 00:04:01.237 CC app/spdk_lspci/spdk_lspci.o 00:04:01.237 CC app/trace_record/trace_record.o 00:04:01.237 CC app/spdk_nvme_perf/perf.o 00:04:01.237 CC app/spdk_nvme_identify/identify.o 00:04:01.494 CC app/nvmf_tgt/nvmf_main.o 00:04:01.494 CC app/spdk_tgt/spdk_tgt.o 00:04:01.494 CC examples/util/zipf/zipf.o 00:04:01.494 CC test/thread/poller_perf/poller_perf.o 00:04:01.494 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.494 LINK spdk_lspci 00:04:01.752 LINK spdk_trace_record 00:04:01.752 LINK nvmf_tgt 00:04:01.752 LINK spdk_tgt 00:04:01.752 LINK zipf 00:04:01.752 LINK poller_perf 00:04:01.752 LINK iscsi_tgt 00:04:01.752 LINK spdk_trace 00:04:02.009 CC app/spdk_nvme_discover/discovery_aer.o 00:04:02.009 CC app/spdk_top/spdk_top.o 00:04:02.009 CC examples/ioat/perf/perf.o 00:04:02.009 CC app/spdk_dd/spdk_dd.o 00:04:02.009 CC examples/vmd/lsvmd/lsvmd.o 00:04:02.009 CC examples/vmd/led/led.o 00:04:02.009 CC test/dma/test_dma/test_dma.o 00:04:02.269 LINK spdk_nvme_discover 00:04:02.269 CC examples/ioat/verify/verify.o 00:04:02.269 LINK spdk_nvme_identify 00:04:02.269 LINK lsvmd 00:04:02.269 LINK led 00:04:02.269 LINK spdk_nvme_perf 00:04:02.269 LINK ioat_perf 00:04:02.527 LINK verify 00:04:02.527 LINK spdk_dd 00:04:02.527 CC app/fio/nvme/fio_plugin.o 00:04:02.527 CC app/vhost/vhost.o 00:04:02.527 CC examples/idxd/perf/perf.o 00:04:02.527 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:02.785 CC test/app/bdev_svc/bdev_svc.o 00:04:02.785 CC examples/thread/thread/thread_ex.o 00:04:02.785 LINK test_dma 00:04:02.785 LINK vhost 00:04:02.785 LINK interrupt_tgt 00:04:02.785 CC examples/sock/hello_world/hello_sock.o 00:04:02.785 CC app/fio/bdev/fio_plugin.o 00:04:03.043 LINK spdk_top 00:04:03.043 LINK bdev_svc 00:04:03.043 LINK idxd_perf 00:04:03.043 LINK thread 00:04:03.043 TEST_HEADER include/spdk/accel.h 00:04:03.043 TEST_HEADER include/spdk/accel_module.h 00:04:03.043 LINK spdk_nvme 00:04:03.043 TEST_HEADER include/spdk/assert.h 00:04:03.043 TEST_HEADER include/spdk/barrier.h 00:04:03.043 TEST_HEADER 
include/spdk/base64.h 00:04:03.043 TEST_HEADER include/spdk/bdev.h 00:04:03.043 TEST_HEADER include/spdk/bdev_module.h 00:04:03.043 TEST_HEADER include/spdk/bdev_zone.h 00:04:03.043 TEST_HEADER include/spdk/bit_array.h 00:04:03.043 TEST_HEADER include/spdk/bit_pool.h 00:04:03.043 LINK hello_sock 00:04:03.043 TEST_HEADER include/spdk/blob_bdev.h 00:04:03.043 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:03.043 TEST_HEADER include/spdk/blobfs.h 00:04:03.043 TEST_HEADER include/spdk/blob.h 00:04:03.043 TEST_HEADER include/spdk/conf.h 00:04:03.302 TEST_HEADER include/spdk/config.h 00:04:03.302 TEST_HEADER include/spdk/cpuset.h 00:04:03.302 TEST_HEADER include/spdk/crc16.h 00:04:03.302 TEST_HEADER include/spdk/crc32.h 00:04:03.302 TEST_HEADER include/spdk/crc64.h 00:04:03.303 TEST_HEADER include/spdk/dif.h 00:04:03.303 TEST_HEADER include/spdk/dma.h 00:04:03.303 TEST_HEADER include/spdk/endian.h 00:04:03.303 TEST_HEADER include/spdk/env_dpdk.h 00:04:03.303 TEST_HEADER include/spdk/env.h 00:04:03.303 CC test/app/histogram_perf/histogram_perf.o 00:04:03.303 TEST_HEADER include/spdk/event.h 00:04:03.303 TEST_HEADER include/spdk/fd_group.h 00:04:03.303 TEST_HEADER include/spdk/fd.h 00:04:03.303 TEST_HEADER include/spdk/file.h 00:04:03.303 TEST_HEADER include/spdk/fsdev.h 00:04:03.303 TEST_HEADER include/spdk/fsdev_module.h 00:04:03.303 TEST_HEADER include/spdk/ftl.h 00:04:03.303 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:03.303 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:03.303 TEST_HEADER include/spdk/gpt_spec.h 00:04:03.303 TEST_HEADER include/spdk/hexlify.h 00:04:03.303 TEST_HEADER include/spdk/histogram_data.h 00:04:03.303 TEST_HEADER include/spdk/idxd.h 00:04:03.303 TEST_HEADER include/spdk/idxd_spec.h 00:04:03.303 TEST_HEADER include/spdk/init.h 00:04:03.303 TEST_HEADER include/spdk/ioat.h 00:04:03.303 TEST_HEADER include/spdk/ioat_spec.h 00:04:03.303 TEST_HEADER include/spdk/iscsi_spec.h 00:04:03.303 TEST_HEADER include/spdk/json.h 00:04:03.303 TEST_HEADER include/spdk/jsonrpc.h 00:04:03.303 TEST_HEADER include/spdk/keyring.h 00:04:03.303 TEST_HEADER include/spdk/keyring_module.h 00:04:03.303 TEST_HEADER include/spdk/likely.h 00:04:03.303 TEST_HEADER include/spdk/log.h 00:04:03.303 TEST_HEADER include/spdk/lvol.h 00:04:03.303 TEST_HEADER include/spdk/md5.h 00:04:03.303 TEST_HEADER include/spdk/memory.h 00:04:03.303 TEST_HEADER include/spdk/mmio.h 00:04:03.303 TEST_HEADER include/spdk/nbd.h 00:04:03.303 TEST_HEADER include/spdk/net.h 00:04:03.303 TEST_HEADER include/spdk/notify.h 00:04:03.303 TEST_HEADER include/spdk/nvme.h 00:04:03.303 TEST_HEADER include/spdk/nvme_intel.h 00:04:03.303 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:03.303 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:03.303 TEST_HEADER include/spdk/nvme_spec.h 00:04:03.303 TEST_HEADER include/spdk/nvme_zns.h 00:04:03.303 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:03.303 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:03.303 TEST_HEADER include/spdk/nvmf.h 00:04:03.303 TEST_HEADER include/spdk/nvmf_spec.h 00:04:03.303 TEST_HEADER include/spdk/nvmf_transport.h 00:04:03.303 TEST_HEADER include/spdk/opal.h 00:04:03.303 TEST_HEADER include/spdk/opal_spec.h 00:04:03.303 TEST_HEADER include/spdk/pci_ids.h 00:04:03.303 TEST_HEADER include/spdk/pipe.h 00:04:03.303 TEST_HEADER include/spdk/queue.h 00:04:03.303 TEST_HEADER include/spdk/reduce.h 00:04:03.303 TEST_HEADER include/spdk/rpc.h 00:04:03.303 TEST_HEADER include/spdk/scheduler.h 00:04:03.303 TEST_HEADER include/spdk/scsi.h 00:04:03.303 TEST_HEADER include/spdk/scsi_spec.h 
00:04:03.303 TEST_HEADER include/spdk/sock.h 00:04:03.303 TEST_HEADER include/spdk/stdinc.h 00:04:03.303 CC test/event/reactor/reactor.o 00:04:03.303 TEST_HEADER include/spdk/string.h 00:04:03.303 TEST_HEADER include/spdk/thread.h 00:04:03.303 CC test/event/event_perf/event_perf.o 00:04:03.303 CC test/env/mem_callbacks/mem_callbacks.o 00:04:03.303 TEST_HEADER include/spdk/trace.h 00:04:03.303 TEST_HEADER include/spdk/trace_parser.h 00:04:03.303 TEST_HEADER include/spdk/tree.h 00:04:03.303 TEST_HEADER include/spdk/ublk.h 00:04:03.303 TEST_HEADER include/spdk/util.h 00:04:03.303 TEST_HEADER include/spdk/uuid.h 00:04:03.303 CC test/nvme/aer/aer.o 00:04:03.303 TEST_HEADER include/spdk/version.h 00:04:03.303 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:03.303 CC test/event/reactor_perf/reactor_perf.o 00:04:03.303 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:03.303 TEST_HEADER include/spdk/vhost.h 00:04:03.303 TEST_HEADER include/spdk/vmd.h 00:04:03.303 TEST_HEADER include/spdk/xor.h 00:04:03.303 TEST_HEADER include/spdk/zipf.h 00:04:03.303 CXX test/cpp_headers/accel.o 00:04:03.303 LINK histogram_perf 00:04:03.303 LINK spdk_bdev 00:04:03.561 LINK reactor 00:04:03.561 LINK event_perf 00:04:03.561 LINK reactor_perf 00:04:03.561 CC examples/accel/perf/accel_perf.o 00:04:03.561 CXX test/cpp_headers/accel_module.o 00:04:03.561 LINK aer 00:04:03.561 LINK nvme_fuzz 00:04:03.820 CXX test/cpp_headers/assert.o 00:04:03.820 CC examples/nvme/hello_world/hello_world.o 00:04:03.820 CC examples/blob/hello_world/hello_blob.o 00:04:03.820 CC test/env/vtophys/vtophys.o 00:04:03.820 CC test/event/app_repeat/app_repeat.o 00:04:03.820 CXX test/cpp_headers/barrier.o 00:04:03.820 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:03.820 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:03.820 CC test/nvme/reset/reset.o 00:04:04.079 LINK vtophys 00:04:04.079 LINK mem_callbacks 00:04:04.079 LINK app_repeat 00:04:04.079 LINK accel_perf 00:04:04.079 LINK hello_blob 00:04:04.079 LINK hello_world 00:04:04.079 CXX test/cpp_headers/base64.o 00:04:04.079 CXX test/cpp_headers/bdev.o 00:04:04.337 LINK hello_fsdev 00:04:04.337 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:04.337 LINK reset 00:04:04.337 CXX test/cpp_headers/bdev_module.o 00:04:04.337 CC test/app/jsoncat/jsoncat.o 00:04:04.337 CC test/event/scheduler/scheduler.o 00:04:04.337 CC examples/nvme/reconnect/reconnect.o 00:04:04.337 CC test/app/stub/stub.o 00:04:04.337 CC examples/blob/cli/blobcli.o 00:04:04.337 LINK env_dpdk_post_init 00:04:04.595 LINK jsoncat 00:04:04.595 CC test/nvme/sgl/sgl.o 00:04:04.595 CC test/env/memory/memory_ut.o 00:04:04.595 CXX test/cpp_headers/bdev_zone.o 00:04:04.595 LINK stub 00:04:04.595 LINK scheduler 00:04:04.595 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:04.854 CXX test/cpp_headers/bit_array.o 00:04:04.854 LINK sgl 00:04:04.854 LINK reconnect 00:04:04.854 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:04.854 CC examples/bdev/hello_world/hello_bdev.o 00:04:04.854 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.854 LINK blobcli 00:04:05.112 CC test/rpc_client/rpc_client_test.o 00:04:05.112 CXX test/cpp_headers/bit_pool.o 00:04:05.112 CC test/nvme/e2edp/nvme_dp.o 00:04:05.112 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:05.112 LINK rpc_client_test 00:04:05.112 LINK hello_bdev 00:04:05.112 CXX test/cpp_headers/blob_bdev.o 00:04:05.370 CC test/env/pci/pci_ut.o 00:04:05.370 LINK vhost_fuzz 00:04:05.370 CXX test/cpp_headers/blobfs_bdev.o 00:04:05.370 LINK nvme_dp 00:04:05.629 CC test/accel/dif/dif.o 00:04:05.629 CXX 
test/cpp_headers/blobfs.o 00:04:05.629 LINK iscsi_fuzz 00:04:05.629 CC test/blobfs/mkfs/mkfs.o 00:04:05.629 LINK nvme_manage 00:04:05.629 LINK bdevperf 00:04:05.887 CC test/nvme/overhead/overhead.o 00:04:05.887 LINK pci_ut 00:04:05.887 CC test/lvol/esnap/esnap.o 00:04:05.887 CXX test/cpp_headers/blob.o 00:04:05.887 LINK memory_ut 00:04:05.887 CXX test/cpp_headers/conf.o 00:04:05.887 LINK mkfs 00:04:05.887 CC examples/nvme/arbitration/arbitration.o 00:04:06.146 CXX test/cpp_headers/config.o 00:04:06.146 CXX test/cpp_headers/cpuset.o 00:04:06.146 CC test/nvme/err_injection/err_injection.o 00:04:06.146 LINK overhead 00:04:06.146 CXX test/cpp_headers/crc16.o 00:04:06.146 CC examples/nvme/hotplug/hotplug.o 00:04:06.146 CC test/nvme/startup/startup.o 00:04:06.146 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:06.146 CXX test/cpp_headers/crc32.o 00:04:06.405 LINK err_injection 00:04:06.405 CC examples/nvme/abort/abort.o 00:04:06.405 LINK dif 00:04:06.405 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:06.405 LINK arbitration 00:04:06.405 LINK startup 00:04:06.405 LINK hotplug 00:04:06.405 LINK cmb_copy 00:04:06.405 CXX test/cpp_headers/crc64.o 00:04:06.405 CC test/nvme/reserve/reserve.o 00:04:06.405 CXX test/cpp_headers/dif.o 00:04:06.713 LINK pmr_persistence 00:04:06.713 CXX test/cpp_headers/dma.o 00:04:06.713 CC test/nvme/simple_copy/simple_copy.o 00:04:06.713 CC test/nvme/connect_stress/connect_stress.o 00:04:06.713 CXX test/cpp_headers/endian.o 00:04:06.713 CC test/nvme/boot_partition/boot_partition.o 00:04:06.713 CXX test/cpp_headers/env_dpdk.o 00:04:06.713 LINK abort 00:04:06.713 CC test/nvme/compliance/nvme_compliance.o 00:04:06.713 LINK reserve 00:04:06.713 CXX test/cpp_headers/env.o 00:04:06.971 LINK simple_copy 00:04:06.971 CXX test/cpp_headers/event.o 00:04:06.971 LINK boot_partition 00:04:06.971 LINK connect_stress 00:04:06.971 CXX test/cpp_headers/fd_group.o 00:04:06.971 CXX test/cpp_headers/fd.o 00:04:06.971 LINK nvme_compliance 00:04:06.971 CXX test/cpp_headers/file.o 00:04:07.230 CXX test/cpp_headers/fsdev.o 00:04:07.230 CC test/bdev/bdevio/bdevio.o 00:04:07.230 CXX test/cpp_headers/fsdev_module.o 00:04:07.230 CC examples/nvmf/nvmf/nvmf.o 00:04:07.230 CC test/nvme/fused_ordering/fused_ordering.o 00:04:07.230 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:07.230 CC test/nvme/fdp/fdp.o 00:04:07.230 CXX test/cpp_headers/ftl.o 00:04:07.230 CXX test/cpp_headers/fuse_dispatcher.o 00:04:07.230 CXX test/cpp_headers/gpt_spec.o 00:04:07.488 CC test/nvme/cuse/cuse.o 00:04:07.488 LINK doorbell_aers 00:04:07.488 LINK fused_ordering 00:04:07.488 LINK nvmf 00:04:07.488 CXX test/cpp_headers/hexlify.o 00:04:07.488 CXX test/cpp_headers/histogram_data.o 00:04:07.488 CXX test/cpp_headers/idxd.o 00:04:07.488 CXX test/cpp_headers/idxd_spec.o 00:04:07.488 LINK bdevio 00:04:07.488 CXX test/cpp_headers/init.o 00:04:07.488 LINK fdp 00:04:07.749 CXX test/cpp_headers/ioat.o 00:04:07.749 CXX test/cpp_headers/ioat_spec.o 00:04:07.749 CXX test/cpp_headers/iscsi_spec.o 00:04:07.749 CXX test/cpp_headers/json.o 00:04:07.749 CXX test/cpp_headers/jsonrpc.o 00:04:07.749 CXX test/cpp_headers/keyring.o 00:04:07.749 CXX test/cpp_headers/keyring_module.o 00:04:07.749 CXX test/cpp_headers/likely.o 00:04:07.749 CXX test/cpp_headers/log.o 00:04:07.749 CXX test/cpp_headers/lvol.o 00:04:07.749 CXX test/cpp_headers/md5.o 00:04:08.005 CXX test/cpp_headers/memory.o 00:04:08.005 CXX test/cpp_headers/mmio.o 00:04:08.005 CXX test/cpp_headers/net.o 00:04:08.005 CXX test/cpp_headers/notify.o 00:04:08.005 CXX 
test/cpp_headers/nbd.o 00:04:08.005 CXX test/cpp_headers/nvme.o 00:04:08.005 CXX test/cpp_headers/nvme_intel.o 00:04:08.005 CXX test/cpp_headers/nvme_ocssd.o 00:04:08.005 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:08.262 CXX test/cpp_headers/nvme_spec.o 00:04:08.262 CXX test/cpp_headers/nvme_zns.o 00:04:08.262 CXX test/cpp_headers/nvmf_cmd.o 00:04:08.262 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:08.262 CXX test/cpp_headers/nvmf.o 00:04:08.262 CXX test/cpp_headers/nvmf_spec.o 00:04:08.262 CXX test/cpp_headers/nvmf_transport.o 00:04:08.262 CXX test/cpp_headers/opal.o 00:04:08.262 CXX test/cpp_headers/opal_spec.o 00:04:08.262 CXX test/cpp_headers/pci_ids.o 00:04:08.262 CXX test/cpp_headers/pipe.o 00:04:08.519 CXX test/cpp_headers/queue.o 00:04:08.519 CXX test/cpp_headers/reduce.o 00:04:08.519 CXX test/cpp_headers/rpc.o 00:04:08.519 CXX test/cpp_headers/scheduler.o 00:04:08.519 CXX test/cpp_headers/scsi.o 00:04:08.519 CXX test/cpp_headers/scsi_spec.o 00:04:08.519 CXX test/cpp_headers/sock.o 00:04:08.519 CXX test/cpp_headers/stdinc.o 00:04:08.519 CXX test/cpp_headers/string.o 00:04:08.519 CXX test/cpp_headers/thread.o 00:04:08.777 CXX test/cpp_headers/trace.o 00:04:08.777 CXX test/cpp_headers/trace_parser.o 00:04:08.777 LINK cuse 00:04:08.777 CXX test/cpp_headers/tree.o 00:04:08.777 CXX test/cpp_headers/ublk.o 00:04:08.777 CXX test/cpp_headers/util.o 00:04:08.777 CXX test/cpp_headers/uuid.o 00:04:08.777 CXX test/cpp_headers/version.o 00:04:08.777 CXX test/cpp_headers/vfio_user_pci.o 00:04:08.777 CXX test/cpp_headers/vfio_user_spec.o 00:04:08.777 CXX test/cpp_headers/vhost.o 00:04:08.777 CXX test/cpp_headers/vmd.o 00:04:08.777 CXX test/cpp_headers/xor.o 00:04:08.777 CXX test/cpp_headers/zipf.o 00:04:11.304 LINK esnap 00:04:11.562 00:04:11.562 real 1m35.874s 00:04:11.562 user 8m35.963s 00:04:11.562 sys 1m50.392s 00:04:11.562 15:52:09 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:11.562 15:52:09 make -- common/autotest_common.sh@10 -- $ set +x 00:04:11.562 ************************************ 00:04:11.562 END TEST make 00:04:11.562 ************************************ 00:04:11.562 15:52:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:11.562 15:52:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:11.562 15:52:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:11.562 15:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.562 15:52:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:11.562 15:52:09 -- pm/common@44 -- $ pid=5414 00:04:11.562 15:52:09 -- pm/common@50 -- $ kill -TERM 5414 00:04:11.562 15:52:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.562 15:52:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:11.562 15:52:09 -- pm/common@44 -- $ pid=5416 00:04:11.562 15:52:09 -- pm/common@50 -- $ kill -TERM 5416 00:04:11.562 15:52:09 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:11.562 15:52:09 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:11.821 15:52:09 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:11.821 15:52:09 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:11.821 15:52:09 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:11.821 15:52:09 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:11.821 15:52:09 -- 
scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:11.821 15:52:09 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:11.821 15:52:09 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:11.821 15:52:09 -- scripts/common.sh@336 -- # IFS=.-: 00:04:11.821 15:52:09 -- scripts/common.sh@336 -- # read -ra ver1 00:04:11.821 15:52:09 -- scripts/common.sh@337 -- # IFS=.-: 00:04:11.821 15:52:09 -- scripts/common.sh@337 -- # read -ra ver2 00:04:11.821 15:52:09 -- scripts/common.sh@338 -- # local 'op=<' 00:04:11.821 15:52:09 -- scripts/common.sh@340 -- # ver1_l=2 00:04:11.821 15:52:09 -- scripts/common.sh@341 -- # ver2_l=1 00:04:11.821 15:52:09 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:11.821 15:52:09 -- scripts/common.sh@344 -- # case "$op" in 00:04:11.821 15:52:09 -- scripts/common.sh@345 -- # : 1 00:04:11.821 15:52:09 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:11.821 15:52:09 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:11.821 15:52:09 -- scripts/common.sh@365 -- # decimal 1 00:04:11.821 15:52:09 -- scripts/common.sh@353 -- # local d=1 00:04:11.821 15:52:09 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:11.821 15:52:09 -- scripts/common.sh@355 -- # echo 1 00:04:11.821 15:52:09 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:11.821 15:52:09 -- scripts/common.sh@366 -- # decimal 2 00:04:11.821 15:52:09 -- scripts/common.sh@353 -- # local d=2 00:04:11.821 15:52:09 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:11.821 15:52:09 -- scripts/common.sh@355 -- # echo 2 00:04:11.821 15:52:09 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:11.821 15:52:09 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:11.821 15:52:09 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:11.821 15:52:09 -- scripts/common.sh@368 -- # return 0 00:04:11.821 15:52:09 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:11.821 15:52:09 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:11.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.821 --rc genhtml_branch_coverage=1 00:04:11.821 --rc genhtml_function_coverage=1 00:04:11.821 --rc genhtml_legend=1 00:04:11.821 --rc geninfo_all_blocks=1 00:04:11.821 --rc geninfo_unexecuted_blocks=1 00:04:11.821 00:04:11.821 ' 00:04:11.821 15:52:09 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:11.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.821 --rc genhtml_branch_coverage=1 00:04:11.821 --rc genhtml_function_coverage=1 00:04:11.821 --rc genhtml_legend=1 00:04:11.821 --rc geninfo_all_blocks=1 00:04:11.821 --rc geninfo_unexecuted_blocks=1 00:04:11.821 00:04:11.821 ' 00:04:11.821 15:52:09 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:11.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.821 --rc genhtml_branch_coverage=1 00:04:11.821 --rc genhtml_function_coverage=1 00:04:11.821 --rc genhtml_legend=1 00:04:11.821 --rc geninfo_all_blocks=1 00:04:11.821 --rc geninfo_unexecuted_blocks=1 00:04:11.821 00:04:11.821 ' 00:04:11.821 15:52:09 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:11.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:11.821 --rc genhtml_branch_coverage=1 00:04:11.821 --rc genhtml_function_coverage=1 00:04:11.821 --rc genhtml_legend=1 00:04:11.821 --rc geninfo_all_blocks=1 00:04:11.821 --rc geninfo_unexecuted_blocks=1 00:04:11.821 00:04:11.821 ' 00:04:11.821 15:52:09 -- spdk/autotest.sh@25 
-- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:11.821 15:52:09 -- nvmf/common.sh@7 -- # uname -s 00:04:11.821 15:52:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.821 15:52:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.821 15:52:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.821 15:52:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.821 15:52:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.821 15:52:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.821 15:52:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.821 15:52:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.821 15:52:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.821 15:52:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.821 15:52:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:04:11.821 15:52:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:04:11.821 15:52:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.821 15:52:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.821 15:52:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:11.821 15:52:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.821 15:52:09 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:11.821 15:52:09 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:11.821 15:52:09 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.821 15:52:09 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.821 15:52:09 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.821 15:52:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.822 15:52:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.822 15:52:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.822 15:52:09 -- paths/export.sh@5 -- # export PATH 00:04:11.822 15:52:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:11.822 15:52:09 -- nvmf/common.sh@51 -- # : 0 00:04:11.822 15:52:09 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:11.822 15:52:09 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:11.822 15:52:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.822 15:52:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.822 15:52:09 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.822 15:52:09 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:11.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:11.822 15:52:09 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:11.822 15:52:09 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:11.822 15:52:09 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:11.822 15:52:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.822 15:52:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.822 15:52:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.822 15:52:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:11.822 15:52:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.822 15:52:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.822 15:52:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.822 15:52:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:11.822 15:52:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:11.822 15:52:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:11.822 15:52:10 -- spdk/autotest.sh@48 -- # udevadm_pid=54583 00:04:11.822 15:52:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:11.822 15:52:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:11.822 15:52:10 -- pm/common@17 -- # local monitor 00:04:11.822 15:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.822 15:52:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.822 15:52:10 -- pm/common@25 -- # sleep 1 00:04:11.822 15:52:10 -- pm/common@21 -- # date +%s 00:04:11.822 15:52:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732117930 00:04:11.822 15:52:10 -- pm/common@21 -- # date +%s 00:04:12.080 15:52:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732117930 00:04:12.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732117930_collect-cpu-load.pm.log 00:04:12.080 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732117930_collect-vmstat.pm.log 00:04:13.018 15:52:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.018 15:52:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:13.018 15:52:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:13.018 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:13.018 15:52:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:13.018 15:52:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:13.018 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:04:13.018 15:52:11 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.018 15:52:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.018 15:52:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.018 15:52:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.018 15:52:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.018 15:52:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
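The "[: : integer expression expected" warning above comes from handing bash's numeric test operator an empty string: `-eq` needs integer operands on both sides. A minimal sketch of the failure and a defensive rewrite follows; the flag name is hypothetical, since the log truncates which variable was empty at nvmf/common.sh line 33.
#!/usr/bin/env bash
# Reproduces the warning seen above: an unset/empty variable fed to -eq makes
# the test command emit "integer expression expected" and return status 2,
# so the && branch is skipped rather than executed.
SPDK_TEST_NVMF_NICS=""                                       # hypothetical flag, empty as in the log
[ "$SPDK_TEST_NVMF_NICS" -eq 1 ] && echo "nics enabled"      # emits the warning, branch skipped

# Defensive variant: default the flag to 0 before the arithmetic comparison.
[ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ] && echo "nics enabled" # no warning, cleanly false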
00:04:13.018 15:52:11 -- common/autotest_common.sh@1457 -- # uname 00:04:13.018 15:52:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:13.018 15:52:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:13.018 15:52:11 -- common/autotest_common.sh@1477 -- # uname 00:04:13.018 15:52:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:13.018 15:52:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:13.018 15:52:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:13.018 lcov: LCOV version 1.15 00:04:13.018 15:52:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:31.195 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:31.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:49.272 15:52:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:49.272 15:52:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.272 15:52:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.272 15:52:44 -- spdk/autotest.sh@78 -- # rm -f 00:04:49.272 15:52:44 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.272 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:49.272 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:49.272 15:52:45 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:49.272 15:52:45 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:49.272 15:52:45 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:49.272 15:52:45 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:49.272 15:52:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.272 15:52:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:49.272 15:52:45 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:49.272 15:52:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.272 15:52:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:49.272 15:52:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:49.272 15:52:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.272 15:52:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:49.272 15:52:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:49.272 15:52:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1653 -- 
# [[ none != none ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.272 15:52:45 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:49.272 15:52:45 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:49.272 15:52:45 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.272 15:52:45 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.272 15:52:45 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:49.272 15:52:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.272 15:52:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.272 15:52:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:49.272 15:52:45 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:49.272 15:52:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:49.272 No valid GPT data, bailing 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # pt= 00:04:49.272 15:52:45 -- scripts/common.sh@395 -- # return 1 00:04:49.272 15:52:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:49.272 1+0 records in 00:04:49.272 1+0 records out 00:04:49.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539402 s, 194 MB/s 00:04:49.272 15:52:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.272 15:52:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.272 15:52:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:49.272 15:52:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:49.272 15:52:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:49.272 No valid GPT data, bailing 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # pt= 00:04:49.272 15:52:45 -- scripts/common.sh@395 -- # return 1 00:04:49.272 15:52:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:49.272 1+0 records in 00:04:49.272 1+0 records out 00:04:49.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436642 s, 240 MB/s 00:04:49.272 15:52:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.272 15:52:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.272 15:52:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:49.272 15:52:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:49.272 15:52:45 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:49.272 No valid GPT data, bailing 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:49.272 15:52:45 -- scripts/common.sh@394 -- # pt= 00:04:49.272 15:52:45 -- scripts/common.sh@395 -- # return 1 00:04:49.272 15:52:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:49.272 1+0 records in 00:04:49.272 1+0 records out 00:04:49.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563589 s, 186 MB/s 00:04:49.273 15:52:45 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.273 15:52:45 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.273 15:52:45 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:49.273 15:52:45 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:49.273 15:52:45 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:49.273 No valid GPT data, bailing 00:04:49.273 15:52:45 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:49.273 15:52:45 -- scripts/common.sh@394 -- # pt= 00:04:49.273 15:52:45 -- scripts/common.sh@395 -- # return 1 00:04:49.273 15:52:45 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:49.273 1+0 records in 00:04:49.273 1+0 records out 00:04:49.273 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521032 s, 201 MB/s 00:04:49.273 15:52:45 -- spdk/autotest.sh@105 -- # sync 00:04:49.273 15:52:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:49.273 15:52:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.273 15:52:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:50.210 15:52:48 -- spdk/autotest.sh@111 -- # uname -s 00:04:50.210 15:52:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:50.210 15:52:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:50.210 15:52:48 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:50.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.776 Hugepages 00:04:50.776 node hugesize free / total 00:04:50.776 node0 1048576kB 0 / 0 00:04:50.776 node0 2048kB 0 / 0 00:04:50.776 00:04:50.776 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.776 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:50.776 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:51.035 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:51.035 15:52:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:51.035 15:52:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:51.035 15:52:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:51.035 15:52:49 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:51.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.603 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.861 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:51.861 15:52:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:52.798 15:52:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:52.798 15:52:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:52.798 15:52:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:52.798 15:52:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:52.798 15:52:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.798 15:52:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.798 15:52:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.798 15:52:50 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:52.798 15:52:50 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.798 15:52:50 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:52.798 15:52:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:52.798 15:52:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:53.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
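The device-prep loop traced above skips zoned namespaces, probes each remaining /dev/nvme*n* for a partition table, and wipes the first MiB when nothing valid is found ("No valid GPT data, bailing"). The condensed sketch below paraphrases that flow; it is not the literal autotest.sh code, and the real helper consults scripts/spdk-gpt.py before falling back to the blkid probe reproduced here. Run as root.
#!/usr/bin/env bash
shopt -s extglob                                   # needed for the !(*p*) glob used in the trace
for dev in /dev/nvme*n!(*p*); do
    name=${dev##*/}
    zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
    [[ "$zoned" != none ]] && continue             # zoned namespaces are left alone
    pt=$(blkid -s PTTYPE -o value "$dev" || true)  # empty when no partition table exists
    if [[ -z "$pt" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # wipe first MiB, as in the log above
    fi
done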
00:04:53.364 Waiting for block devices as requested 00:04:53.365 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.365 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:53.365 15:52:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.365 15:52:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:53.365 15:52:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.365 15:52:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:53.365 15:52:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.365 15:52:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:53.365 15:52:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:53.623 15:52:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:53.623 15:52:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:53.623 15:52:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:53.623 15:52:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.623 15:52:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:53.623 15:52:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.623 15:52:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.623 15:52:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.623 15:52:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.623 15:52:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:53.623 15:52:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.623 15:52:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.623 15:52:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.623 15:52:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.623 15:52:51 -- common/autotest_common.sh@1543 -- # continue 00:04:53.623 15:52:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.624 15:52:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:53.624 15:52:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:53.624 15:52:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:53.624 15:52:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:53.624 15:52:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.624 15:52:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.624 15:52:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:53.624 15:52:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.624 15:52:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.624 15:52:51 
-- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.624 15:52:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:53.624 15:52:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.624 15:52:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.624 15:52:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.624 15:52:51 -- common/autotest_common.sh@1543 -- # continue 00:04:53.624 15:52:51 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.624 15:52:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.624 15:52:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.624 15:52:51 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.624 15:52:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.624 15:52:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.624 15:52:51 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:54.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:54.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.449 15:52:52 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:54.449 15:52:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.449 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 15:52:52 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:54.449 15:52:52 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:54.449 15:52:52 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:54.449 15:52:52 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:54.449 15:52:52 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:54.449 15:52:52 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:54.449 15:52:52 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:54.449 15:52:52 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:54.449 15:52:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:54.449 15:52:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:54.449 15:52:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.449 15:52:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:54.449 15:52:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.707 15:52:52 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:54.707 15:52:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:54.707 15:52:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.707 15:52:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:54.707 15:52:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.707 15:52:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.707 15:52:52 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:54.707 15:52:52 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:54.707 15:52:52 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:54.707 15:52:52 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:54.707 15:52:52 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:54.707 15:52:52 -- common/autotest_common.sh@1572 -- # return 0 
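The per-controller loop above reads OACS from `nvme id-ctrl` and masks bit 3 (0x8, Namespace Management in the NVMe spec), then checks that the unallocated NVM capacity is zero before moving on. A small sketch of that probe, using the same values the log shows for these QEMU controllers (0x12a and 0):
#!/usr/bin/env bash
ctrl=/dev/nvme1
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. " 0x12a"
oacs_ns_manage=$(( oacs & 0x8 ))                              # bit 3 set -> namespace management supported
unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # " 0" on these devices
if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "$ctrl: namespace management supported, no unallocated capacity"
fi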
00:04:54.707 15:52:52 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:54.707 15:52:52 -- common/autotest_common.sh@1580 -- # return 0 00:04:54.707 15:52:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:54.707 15:52:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:54.707 15:52:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.707 15:52:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:54.707 15:52:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:54.707 15:52:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.707 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.707 15:52:52 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:54.707 15:52:52 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:54.707 15:52:52 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:54.707 15:52:52 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.707 15:52:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.707 15:52:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.707 15:52:52 -- common/autotest_common.sh@10 -- # set +x 00:04:54.707 ************************************ 00:04:54.707 START TEST env 00:04:54.707 ************************************ 00:04:54.707 15:52:52 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:54.707 * Looking for test storage... 00:04:54.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:54.707 15:52:52 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.707 15:52:52 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.707 15:52:52 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.966 15:52:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.966 15:52:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.966 15:52:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.966 15:52:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.966 15:52:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.966 15:52:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.966 15:52:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.966 15:52:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.966 15:52:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.966 15:52:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.966 15:52:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.966 15:52:52 env -- scripts/common.sh@344 -- # case "$op" in 00:04:54.966 15:52:52 env -- scripts/common.sh@345 -- # : 1 00:04:54.966 15:52:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.966 15:52:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.966 15:52:52 env -- scripts/common.sh@365 -- # decimal 1 00:04:54.966 15:52:52 env -- scripts/common.sh@353 -- # local d=1 00:04:54.966 15:52:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.966 15:52:52 env -- scripts/common.sh@355 -- # echo 1 00:04:54.966 15:52:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.966 15:52:52 env -- scripts/common.sh@366 -- # decimal 2 00:04:54.966 15:52:52 env -- scripts/common.sh@353 -- # local d=2 00:04:54.966 15:52:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.966 15:52:52 env -- scripts/common.sh@355 -- # echo 2 00:04:54.966 15:52:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.966 15:52:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.966 15:52:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.966 15:52:52 env -- scripts/common.sh@368 -- # return 0 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.966 --rc genhtml_branch_coverage=1 00:04:54.966 --rc genhtml_function_coverage=1 00:04:54.966 --rc genhtml_legend=1 00:04:54.966 --rc geninfo_all_blocks=1 00:04:54.966 --rc geninfo_unexecuted_blocks=1 00:04:54.966 00:04:54.966 ' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.966 --rc genhtml_branch_coverage=1 00:04:54.966 --rc genhtml_function_coverage=1 00:04:54.966 --rc genhtml_legend=1 00:04:54.966 --rc geninfo_all_blocks=1 00:04:54.966 --rc geninfo_unexecuted_blocks=1 00:04:54.966 00:04:54.966 ' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.966 --rc genhtml_branch_coverage=1 00:04:54.966 --rc genhtml_function_coverage=1 00:04:54.966 --rc genhtml_legend=1 00:04:54.966 --rc geninfo_all_blocks=1 00:04:54.966 --rc geninfo_unexecuted_blocks=1 00:04:54.966 00:04:54.966 ' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.966 --rc genhtml_branch_coverage=1 00:04:54.966 --rc genhtml_function_coverage=1 00:04:54.966 --rc genhtml_legend=1 00:04:54.966 --rc geninfo_all_blocks=1 00:04:54.966 --rc geninfo_unexecuted_blocks=1 00:04:54.966 00:04:54.966 ' 00:04:54.966 15:52:52 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.966 15:52:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.966 15:52:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.966 ************************************ 00:04:54.966 START TEST env_memory 00:04:54.966 ************************************ 00:04:54.966 15:52:52 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:54.966 00:04:54.966 00:04:54.966 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.966 http://cunit.sourceforge.net/ 00:04:54.966 00:04:54.966 00:04:54.966 Suite: memory 00:04:54.966 Test: alloc and free memory map ...[2024-11-20 15:52:53.034448] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.966 passed 00:04:54.966 Test: mem map translation ...[2024-11-20 15:52:53.065569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.966 [2024-11-20 15:52:53.065625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.966 [2024-11-20 15:52:53.065691] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.966 [2024-11-20 15:52:53.065702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.966 passed 00:04:54.966 Test: mem map registration ...[2024-11-20 15:52:53.129364] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:54.966 [2024-11-20 15:52:53.129474] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:54.966 passed 00:04:54.966 Test: mem map adjacent registrations ...passed 00:04:54.966 00:04:54.966 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.966 suites 1 1 n/a 0 0 00:04:54.966 tests 4 4 4 0 0 00:04:54.967 asserts 152 152 152 0 n/a 00:04:54.967 00:04:54.967 Elapsed time = 0.204 seconds 00:04:54.967 00:04:54.967 real 0m0.221s 00:04:54.967 user 0m0.206s 00:04:54.967 sys 0m0.011s 00:04:54.967 15:52:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.967 15:52:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:54.967 ************************************ 00:04:54.967 END TEST env_memory 00:04:54.967 ************************************ 00:04:55.225 15:52:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.225 15:52:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.225 15:52:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.225 15:52:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.225 ************************************ 00:04:55.225 START TEST env_vtophys 00:04:55.225 ************************************ 00:04:55.225 15:52:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:55.225 EAL: lib.eal log level changed from notice to debug 00:04:55.225 EAL: Detected lcore 0 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 1 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 2 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 3 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 4 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 5 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 6 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 7 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 8 as core 0 on socket 0 00:04:55.225 EAL: Detected lcore 9 as core 0 on socket 0 00:04:55.225 EAL: Maximum logical cores by configuration: 128 00:04:55.225 EAL: Detected CPU lcores: 10 00:04:55.225 EAL: Detected NUMA nodes: 1 00:04:55.225 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:55.225 EAL: Detected shared linkage of DPDK 00:04:55.225 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:55.225 EAL: Selected IOVA mode 'PA' 00:04:55.225 EAL: Probing VFIO support... 00:04:55.225 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:55.225 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:55.225 EAL: Ask a virtual area of 0x2e000 bytes 00:04:55.225 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:55.225 EAL: Setting up physically contiguous memory... 00:04:55.225 EAL: Setting maximum number of open files to 524288 00:04:55.225 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:55.225 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:55.225 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.225 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:55.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.225 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.225 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:55.225 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:55.225 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.225 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:55.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.225 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.225 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:55.225 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:55.225 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.225 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:55.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.225 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.225 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:55.225 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:55.225 EAL: Ask a virtual area of 0x61000 bytes 00:04:55.225 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:55.225 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:55.225 EAL: Ask a virtual area of 0x400000000 bytes 00:04:55.225 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:55.225 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:55.225 EAL: Hugepages will be freed exactly as allocated. 00:04:55.225 EAL: No shared files mode enabled, IPC is disabled 00:04:55.225 EAL: No shared files mode enabled, IPC is disabled 00:04:55.225 EAL: TSC frequency is ~2200000 KHz 00:04:55.225 EAL: Main lcore 0 is ready (tid=7f43941eea00;cpuset=[0]) 00:04:55.225 EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 0 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 2MB 00:04:55.226 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:55.226 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:55.226 EAL: Mem event callback 'spdk:(nil)' registered 00:04:55.226 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:55.226 00:04:55.226 00:04:55.226 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.226 http://cunit.sourceforge.net/ 00:04:55.226 00:04:55.226 00:04:55.226 Suite: components_suite 00:04:55.226 Test: vtophys_malloc_test ...passed 00:04:55.226 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 4 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 4MB 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was shrunk by 4MB 00:04:55.226 EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 4 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 6MB 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was shrunk by 6MB 00:04:55.226 EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 4 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 10MB 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was shrunk by 10MB 00:04:55.226 EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 4 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 18MB 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was shrunk by 18MB 00:04:55.226 EAL: Trying to obtain current memory policy. 00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.226 EAL: Restoring previous memory policy: 4 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was expanded by 34MB 00:04:55.226 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.226 EAL: request: mp_malloc_sync 00:04:55.226 EAL: No shared files mode enabled, IPC is disabled 00:04:55.226 EAL: Heap on socket 0 was shrunk by 34MB 00:04:55.226 EAL: Trying to obtain current memory policy. 
00:04:55.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.485 EAL: Restoring previous memory policy: 4 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.485 EAL: request: mp_malloc_sync 00:04:55.485 EAL: No shared files mode enabled, IPC is disabled 00:04:55.485 EAL: Heap on socket 0 was expanded by 66MB 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.485 EAL: request: mp_malloc_sync 00:04:55.485 EAL: No shared files mode enabled, IPC is disabled 00:04:55.485 EAL: Heap on socket 0 was shrunk by 66MB 00:04:55.485 EAL: Trying to obtain current memory policy. 00:04:55.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.485 EAL: Restoring previous memory policy: 4 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.485 EAL: request: mp_malloc_sync 00:04:55.485 EAL: No shared files mode enabled, IPC is disabled 00:04:55.485 EAL: Heap on socket 0 was expanded by 130MB 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.485 EAL: request: mp_malloc_sync 00:04:55.485 EAL: No shared files mode enabled, IPC is disabled 00:04:55.485 EAL: Heap on socket 0 was shrunk by 130MB 00:04:55.485 EAL: Trying to obtain current memory policy. 00:04:55.485 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.485 EAL: Restoring previous memory policy: 4 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.485 EAL: request: mp_malloc_sync 00:04:55.485 EAL: No shared files mode enabled, IPC is disabled 00:04:55.485 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.854 EAL: request: mp_malloc_sync 00:04:55.854 EAL: No shared files mode enabled, IPC is disabled 00:04:55.854 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.854 EAL: Trying to obtain current memory policy. 00:04:55.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.854 EAL: Restoring previous memory policy: 4 00:04:55.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.854 EAL: request: mp_malloc_sync 00:04:55.854 EAL: No shared files mode enabled, IPC is disabled 00:04:55.854 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.854 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.112 EAL: request: mp_malloc_sync 00:04:56.112 EAL: No shared files mode enabled, IPC is disabled 00:04:56.112 EAL: Heap on socket 0 was shrunk by 514MB 00:04:56.112 EAL: Trying to obtain current memory policy. 
00:04:56.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:56.370 EAL: Restoring previous memory policy: 4 00:04:56.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.370 EAL: request: mp_malloc_sync 00:04:56.370 EAL: No shared files mode enabled, IPC is disabled 00:04:56.370 EAL: Heap on socket 0 was expanded by 1026MB 00:04:56.370 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.630 passed 00:04:56.630 00:04:56.630 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.630 suites 1 1 n/a 0 0 00:04:56.630 tests 2 2 2 0 0 00:04:56.630 asserts 5463 5463 5463 0 n/a 00:04:56.630 00:04:56.630 Elapsed time = 1.309 seconds 00:04:56.630 EAL: request: mp_malloc_sync 00:04:56.630 EAL: No shared files mode enabled, IPC is disabled 00:04:56.630 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.630 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.630 EAL: request: mp_malloc_sync 00:04:56.630 EAL: No shared files mode enabled, IPC is disabled 00:04:56.630 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.630 EAL: No shared files mode enabled, IPC is disabled 00:04:56.630 EAL: No shared files mode enabled, IPC is disabled 00:04:56.630 EAL: No shared files mode enabled, IPC is disabled 00:04:56.630 00:04:56.630 real 0m1.527s 00:04:56.630 user 0m0.846s 00:04:56.630 sys 0m0.543s 00:04:56.630 15:52:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.630 ************************************ 00:04:56.630 END TEST env_vtophys 00:04:56.630 ************************************ 00:04:56.630 15:52:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 15:52:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.630 15:52:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.630 15:52:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.630 15:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 ************************************ 00:04:56.630 START TEST env_pci 00:04:56.630 ************************************ 00:04:56.630 15:52:54 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.630 00:04:56.630 00:04:56.630 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.630 http://cunit.sourceforge.net/ 00:04:56.630 00:04:56.630 00:04:56.630 Suite: pci 00:04:56.630 Test: pci_hook ...[2024-11-20 15:52:54.850693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56840 has claimed it 00:04:56.630 passed 00:04:56.630 00:04:56.630 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.630 suites 1 1 n/a 0 0 00:04:56.630 tests 1 1 1 0 0 00:04:56.630 asserts 25 25 25 0 n/a 00:04:56.630 00:04:56.630 Elapsed time = 0.002 seconds 00:04:56.630 EAL: Cannot find device (10000:00:01.0) 00:04:56.630 EAL: Failed to attach device on primary process 00:04:56.630 00:04:56.630 real 0m0.020s 00:04:56.630 user 0m0.010s 00:04:56.630 sys 0m0.009s 00:04:56.630 15:52:54 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.630 15:52:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.630 ************************************ 00:04:56.630 END TEST env_pci 00:04:56.630 ************************************ 00:04:56.888 15:52:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.888 15:52:54 env -- env/env.sh@15 -- # uname 00:04:56.888 15:52:54 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.888 15:52:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.888 15:52:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.888 15:52:54 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:56.888 15:52:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.888 15:52:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.888 ************************************ 00:04:56.888 START TEST env_dpdk_post_init 00:04:56.888 ************************************ 00:04:56.888 15:52:54 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.888 EAL: Detected CPU lcores: 10 00:04:56.888 EAL: Detected NUMA nodes: 1 00:04:56.888 EAL: Detected shared linkage of DPDK 00:04:56.888 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.888 EAL: Selected IOVA mode 'PA' 00:04:56.888 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.888 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.888 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.888 Starting DPDK initialization... 00:04:56.888 Starting SPDK post initialization... 00:04:56.888 SPDK NVMe probe 00:04:56.888 Attaching to 0000:00:10.0 00:04:56.888 Attaching to 0000:00:11.0 00:04:56.888 Attached to 0000:00:10.0 00:04:56.888 Attached to 0000:00:11.0 00:04:56.888 Cleaning up... 00:04:56.888 00:04:56.888 real 0m0.180s 00:04:56.888 user 0m0.046s 00:04:56.888 sys 0m0.035s 00:04:56.888 15:52:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.888 15:52:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.888 ************************************ 00:04:56.888 END TEST env_dpdk_post_init 00:04:56.888 ************************************ 00:04:56.888 15:52:55 env -- env/env.sh@26 -- # uname 00:04:57.147 15:52:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:57.147 15:52:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.147 15:52:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.147 15:52:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.147 15:52:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.147 ************************************ 00:04:57.147 START TEST env_mem_callbacks 00:04:57.147 ************************************ 00:04:57.147 15:52:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:57.147 EAL: Detected CPU lcores: 10 00:04:57.147 EAL: Detected NUMA nodes: 1 00:04:57.147 EAL: Detected shared linkage of DPDK 00:04:57.147 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:57.147 EAL: Selected IOVA mode 'PA' 00:04:57.147 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:57.147 00:04:57.147 00:04:57.147 CUnit - A unit testing framework for C - Version 2.1-3 00:04:57.147 http://cunit.sourceforge.net/ 00:04:57.147 00:04:57.147 00:04:57.147 Suite: memory 00:04:57.147 Test: test ... 
00:04:57.147 register 0x200000200000 2097152 00:04:57.147 malloc 3145728 00:04:57.147 register 0x200000400000 4194304 00:04:57.147 buf 0x200000500000 len 3145728 PASSED 00:04:57.147 malloc 64 00:04:57.147 buf 0x2000004fff40 len 64 PASSED 00:04:57.147 malloc 4194304 00:04:57.147 register 0x200000800000 6291456 00:04:57.147 buf 0x200000a00000 len 4194304 PASSED 00:04:57.147 free 0x200000500000 3145728 00:04:57.147 free 0x2000004fff40 64 00:04:57.147 unregister 0x200000400000 4194304 PASSED 00:04:57.147 free 0x200000a00000 4194304 00:04:57.147 unregister 0x200000800000 6291456 PASSED 00:04:57.147 malloc 8388608 00:04:57.147 register 0x200000400000 10485760 00:04:57.147 buf 0x200000600000 len 8388608 PASSED 00:04:57.147 free 0x200000600000 8388608 00:04:57.147 unregister 0x200000400000 10485760 PASSED 00:04:57.147 passed 00:04:57.147 00:04:57.147 Run Summary: Type Total Ran Passed Failed Inactive 00:04:57.147 suites 1 1 n/a 0 0 00:04:57.147 tests 1 1 1 0 0 00:04:57.147 asserts 15 15 15 0 n/a 00:04:57.147 00:04:57.147 Elapsed time = 0.010 seconds 00:04:57.147 00:04:57.147 real 0m0.147s 00:04:57.147 user 0m0.021s 00:04:57.147 sys 0m0.025s 00:04:57.147 15:52:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.147 15:52:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:57.147 ************************************ 00:04:57.147 END TEST env_mem_callbacks 00:04:57.147 ************************************ 00:04:57.147 00:04:57.147 real 0m2.559s 00:04:57.147 user 0m1.346s 00:04:57.147 sys 0m0.869s 00:04:57.147 15:52:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.147 ************************************ 00:04:57.147 END TEST env 00:04:57.147 ************************************ 00:04:57.147 15:52:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:57.147 15:52:55 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.147 15:52:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.147 15:52:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.147 15:52:55 -- common/autotest_common.sh@10 -- # set +x 00:04:57.147 ************************************ 00:04:57.147 START TEST rpc 00:04:57.147 ************************************ 00:04:57.147 15:52:55 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.405 * Looking for test storage... 
00:04:57.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.405 15:52:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.405 15:52:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.405 15:52:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.405 15:52:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.405 15:52:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.405 15:52:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.405 15:52:55 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.405 15:52:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.405 15:52:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.405 15:52:55 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.405 15:52:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.405 15:52:55 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.405 15:52:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.405 15:52:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.405 15:52:55 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.405 --rc genhtml_branch_coverage=1 00:04:57.405 --rc genhtml_function_coverage=1 00:04:57.405 --rc genhtml_legend=1 00:04:57.405 --rc geninfo_all_blocks=1 00:04:57.405 --rc geninfo_unexecuted_blocks=1 00:04:57.405 00:04:57.405 ' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.405 --rc genhtml_branch_coverage=1 00:04:57.405 --rc genhtml_function_coverage=1 00:04:57.405 --rc genhtml_legend=1 00:04:57.405 --rc geninfo_all_blocks=1 00:04:57.405 --rc geninfo_unexecuted_blocks=1 00:04:57.405 00:04:57.405 ' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.405 --rc genhtml_branch_coverage=1 00:04:57.405 --rc genhtml_function_coverage=1 00:04:57.405 --rc 
genhtml_legend=1 00:04:57.405 --rc geninfo_all_blocks=1 00:04:57.405 --rc geninfo_unexecuted_blocks=1 00:04:57.405 00:04:57.405 ' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.405 --rc genhtml_branch_coverage=1 00:04:57.405 --rc genhtml_function_coverage=1 00:04:57.405 --rc genhtml_legend=1 00:04:57.405 --rc geninfo_all_blocks=1 00:04:57.405 --rc geninfo_unexecuted_blocks=1 00:04:57.405 00:04:57.405 ' 00:04:57.405 15:52:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56958 00:04:57.405 15:52:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.405 15:52:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.405 15:52:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56958 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 56958 ']' 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.405 15:52:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.405 [2024-11-20 15:52:55.631464] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:04:57.405 [2024-11-20 15:52:55.631601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56958 ] 00:04:57.663 [2024-11-20 15:52:55.774809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.663 [2024-11-20 15:52:55.826375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.663 [2024-11-20 15:52:55.826457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56958' to capture a snapshot of events at runtime. 00:04:57.663 [2024-11-20 15:52:55.826483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.663 [2024-11-20 15:52:55.826491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.664 [2024-11-20 15:52:55.826498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56958 for offline analysis/debug. 
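[editor's note] The app_setup_trace notices above point at two ways to inspect the tracepoints enabled on this spdk_tgt (started with '-e bdev'). A minimal sketch of both, reusing the pid and shm path printed in this run; the spdk_trace binary location and the use of -f to read a copied shm file are assumptions about the build tree, not something shown in this log:
  # live capture: attach to the running target's trace shared memory by app name and pid, as the notice suggests
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 56958
  # offline analysis: copy the shm file while the target is still up, then read the copy back (assumed -f usage)
  cp /dev/shm/spdk_tgt_trace.pid56958 /tmp/spdk_tgt_trace.pid56958
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid56958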
00:04:57.664 [2024-11-20 15:52:55.826974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.664 [2024-11-20 15:52:55.898722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.600 15:52:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.600 15:52:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:58.600 15:52:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.600 15:52:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.600 15:52:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.600 15:52:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.600 15:52:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.600 15:52:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.600 15:52:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.600 ************************************ 00:04:58.600 START TEST rpc_integrity 00:04:58.600 ************************************ 00:04:58.600 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:58.600 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.600 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.600 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.600 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.600 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.600 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.600 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.600 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.601 { 00:04:58.601 "name": "Malloc0", 00:04:58.601 "aliases": [ 00:04:58.601 "ed44f1ef-f0f1-4996-8506-2bcc199a1812" 00:04:58.601 ], 00:04:58.601 "product_name": "Malloc disk", 00:04:58.601 "block_size": 512, 00:04:58.601 "num_blocks": 16384, 00:04:58.601 "uuid": "ed44f1ef-f0f1-4996-8506-2bcc199a1812", 00:04:58.601 "assigned_rate_limits": { 00:04:58.601 "rw_ios_per_sec": 0, 00:04:58.601 "rw_mbytes_per_sec": 0, 00:04:58.601 "r_mbytes_per_sec": 0, 00:04:58.601 "w_mbytes_per_sec": 0 00:04:58.601 }, 00:04:58.601 "claimed": false, 00:04:58.601 "zoned": false, 00:04:58.601 
"supported_io_types": { 00:04:58.601 "read": true, 00:04:58.601 "write": true, 00:04:58.601 "unmap": true, 00:04:58.601 "flush": true, 00:04:58.601 "reset": true, 00:04:58.601 "nvme_admin": false, 00:04:58.601 "nvme_io": false, 00:04:58.601 "nvme_io_md": false, 00:04:58.601 "write_zeroes": true, 00:04:58.601 "zcopy": true, 00:04:58.601 "get_zone_info": false, 00:04:58.601 "zone_management": false, 00:04:58.601 "zone_append": false, 00:04:58.601 "compare": false, 00:04:58.601 "compare_and_write": false, 00:04:58.601 "abort": true, 00:04:58.601 "seek_hole": false, 00:04:58.601 "seek_data": false, 00:04:58.601 "copy": true, 00:04:58.601 "nvme_iov_md": false 00:04:58.601 }, 00:04:58.601 "memory_domains": [ 00:04:58.601 { 00:04:58.601 "dma_device_id": "system", 00:04:58.601 "dma_device_type": 1 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.601 "dma_device_type": 2 00:04:58.601 } 00:04:58.601 ], 00:04:58.601 "driver_specific": {} 00:04:58.601 } 00:04:58.601 ]' 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 [2024-11-20 15:52:56.791597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.601 [2024-11-20 15:52:56.791643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.601 [2024-11-20 15:52:56.791675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ff0050 00:04:58.601 [2024-11-20 15:52:56.791683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.601 [2024-11-20 15:52:56.793298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.601 [2024-11-20 15:52:56.793338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.601 Passthru0 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.601 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.601 { 00:04:58.601 "name": "Malloc0", 00:04:58.601 "aliases": [ 00:04:58.601 "ed44f1ef-f0f1-4996-8506-2bcc199a1812" 00:04:58.601 ], 00:04:58.601 "product_name": "Malloc disk", 00:04:58.601 "block_size": 512, 00:04:58.601 "num_blocks": 16384, 00:04:58.601 "uuid": "ed44f1ef-f0f1-4996-8506-2bcc199a1812", 00:04:58.601 "assigned_rate_limits": { 00:04:58.601 "rw_ios_per_sec": 0, 00:04:58.601 "rw_mbytes_per_sec": 0, 00:04:58.601 "r_mbytes_per_sec": 0, 00:04:58.601 "w_mbytes_per_sec": 0 00:04:58.601 }, 00:04:58.601 "claimed": true, 00:04:58.601 "claim_type": "exclusive_write", 00:04:58.601 "zoned": false, 00:04:58.601 "supported_io_types": { 00:04:58.601 "read": true, 00:04:58.601 "write": true, 00:04:58.601 "unmap": true, 00:04:58.601 "flush": true, 00:04:58.601 "reset": true, 00:04:58.601 "nvme_admin": false, 
00:04:58.601 "nvme_io": false, 00:04:58.601 "nvme_io_md": false, 00:04:58.601 "write_zeroes": true, 00:04:58.601 "zcopy": true, 00:04:58.601 "get_zone_info": false, 00:04:58.601 "zone_management": false, 00:04:58.601 "zone_append": false, 00:04:58.601 "compare": false, 00:04:58.601 "compare_and_write": false, 00:04:58.601 "abort": true, 00:04:58.601 "seek_hole": false, 00:04:58.601 "seek_data": false, 00:04:58.601 "copy": true, 00:04:58.601 "nvme_iov_md": false 00:04:58.601 }, 00:04:58.601 "memory_domains": [ 00:04:58.601 { 00:04:58.601 "dma_device_id": "system", 00:04:58.601 "dma_device_type": 1 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.601 "dma_device_type": 2 00:04:58.601 } 00:04:58.601 ], 00:04:58.601 "driver_specific": {} 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "name": "Passthru0", 00:04:58.601 "aliases": [ 00:04:58.601 "78779ab3-8533-5e78-b0d1-ff1659de063f" 00:04:58.601 ], 00:04:58.601 "product_name": "passthru", 00:04:58.601 "block_size": 512, 00:04:58.601 "num_blocks": 16384, 00:04:58.601 "uuid": "78779ab3-8533-5e78-b0d1-ff1659de063f", 00:04:58.601 "assigned_rate_limits": { 00:04:58.601 "rw_ios_per_sec": 0, 00:04:58.601 "rw_mbytes_per_sec": 0, 00:04:58.601 "r_mbytes_per_sec": 0, 00:04:58.601 "w_mbytes_per_sec": 0 00:04:58.601 }, 00:04:58.601 "claimed": false, 00:04:58.601 "zoned": false, 00:04:58.601 "supported_io_types": { 00:04:58.601 "read": true, 00:04:58.601 "write": true, 00:04:58.601 "unmap": true, 00:04:58.601 "flush": true, 00:04:58.601 "reset": true, 00:04:58.601 "nvme_admin": false, 00:04:58.601 "nvme_io": false, 00:04:58.601 "nvme_io_md": false, 00:04:58.601 "write_zeroes": true, 00:04:58.601 "zcopy": true, 00:04:58.601 "get_zone_info": false, 00:04:58.601 "zone_management": false, 00:04:58.601 "zone_append": false, 00:04:58.601 "compare": false, 00:04:58.601 "compare_and_write": false, 00:04:58.601 "abort": true, 00:04:58.601 "seek_hole": false, 00:04:58.601 "seek_data": false, 00:04:58.601 "copy": true, 00:04:58.601 "nvme_iov_md": false 00:04:58.601 }, 00:04:58.601 "memory_domains": [ 00:04:58.601 { 00:04:58.601 "dma_device_id": "system", 00:04:58.601 "dma_device_type": 1 00:04:58.601 }, 00:04:58.601 { 00:04:58.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.601 "dma_device_type": 2 00:04:58.601 } 00:04:58.601 ], 00:04:58.601 "driver_specific": { 00:04:58.601 "passthru": { 00:04:58.601 "name": "Passthru0", 00:04:58.601 "base_bdev_name": "Malloc0" 00:04:58.601 } 00:04:58.601 } 00:04:58.601 } 00:04:58.601 ]' 00:04:58.601 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.860 15:52:56 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.860 ************************************ 00:04:58.860 END TEST rpc_integrity 00:04:58.860 ************************************ 00:04:58.860 15:52:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.860 00:04:58.860 real 0m0.328s 00:04:58.860 user 0m0.215s 00:04:58.860 sys 0m0.044s 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.860 15:52:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.860 15:52:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.860 15:52:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.860 15:52:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 ************************************ 00:04:58.860 START TEST rpc_plugins 00:04:58.860 ************************************ 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:58.860 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.860 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.860 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.860 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.860 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.860 { 00:04:58.860 "name": "Malloc1", 00:04:58.860 "aliases": [ 00:04:58.860 "2ebe38d5-af89-4efd-920e-df3f855bb1fb" 00:04:58.860 ], 00:04:58.860 "product_name": "Malloc disk", 00:04:58.860 "block_size": 4096, 00:04:58.860 "num_blocks": 256, 00:04:58.860 "uuid": "2ebe38d5-af89-4efd-920e-df3f855bb1fb", 00:04:58.860 "assigned_rate_limits": { 00:04:58.860 "rw_ios_per_sec": 0, 00:04:58.860 "rw_mbytes_per_sec": 0, 00:04:58.860 "r_mbytes_per_sec": 0, 00:04:58.860 "w_mbytes_per_sec": 0 00:04:58.860 }, 00:04:58.860 "claimed": false, 00:04:58.860 "zoned": false, 00:04:58.860 "supported_io_types": { 00:04:58.860 "read": true, 00:04:58.860 "write": true, 00:04:58.860 "unmap": true, 00:04:58.860 "flush": true, 00:04:58.860 "reset": true, 00:04:58.860 "nvme_admin": false, 00:04:58.860 "nvme_io": false, 00:04:58.860 "nvme_io_md": false, 00:04:58.860 "write_zeroes": true, 00:04:58.860 "zcopy": true, 00:04:58.860 "get_zone_info": false, 00:04:58.860 "zone_management": false, 00:04:58.860 "zone_append": false, 00:04:58.860 "compare": false, 00:04:58.860 "compare_and_write": false, 00:04:58.860 "abort": true, 00:04:58.860 "seek_hole": false, 00:04:58.860 "seek_data": false, 00:04:58.860 "copy": true, 00:04:58.860 "nvme_iov_md": false 00:04:58.860 }, 00:04:58.860 "memory_domains": [ 00:04:58.860 { 
00:04:58.860 "dma_device_id": "system", 00:04:58.860 "dma_device_type": 1 00:04:58.860 }, 00:04:58.860 { 00:04:58.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.860 "dma_device_type": 2 00:04:58.860 } 00:04:58.860 ], 00:04:58.860 "driver_specific": {} 00:04:58.860 } 00:04:58.861 ]' 00:04:58.861 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:58.861 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:58.861 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:58.861 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.861 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.120 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.120 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.120 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.120 ************************************ 00:04:59.120 END TEST rpc_plugins 00:04:59.120 ************************************ 00:04:59.120 15:52:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.120 00:04:59.120 real 0m0.171s 00:04:59.120 user 0m0.115s 00:04:59.120 sys 0m0.019s 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.120 15:52:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.120 15:52:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.120 15:52:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.120 15:52:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.120 15:52:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.120 ************************************ 00:04:59.120 START TEST rpc_trace_cmd_test 00:04:59.120 ************************************ 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.120 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56958", 00:04:59.120 "tpoint_group_mask": "0x8", 00:04:59.120 "iscsi_conn": { 00:04:59.120 "mask": "0x2", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "scsi": { 00:04:59.120 "mask": "0x4", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "bdev": { 00:04:59.120 "mask": "0x8", 00:04:59.120 "tpoint_mask": "0xffffffffffffffff" 00:04:59.120 }, 00:04:59.120 "nvmf_rdma": { 00:04:59.120 "mask": "0x10", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "nvmf_tcp": { 00:04:59.120 "mask": "0x20", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "ftl": { 00:04:59.120 
"mask": "0x40", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "blobfs": { 00:04:59.120 "mask": "0x80", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "dsa": { 00:04:59.120 "mask": "0x200", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "thread": { 00:04:59.120 "mask": "0x400", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "nvme_pcie": { 00:04:59.120 "mask": "0x800", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "iaa": { 00:04:59.120 "mask": "0x1000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "nvme_tcp": { 00:04:59.120 "mask": "0x2000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "bdev_nvme": { 00:04:59.120 "mask": "0x4000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "sock": { 00:04:59.120 "mask": "0x8000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "blob": { 00:04:59.120 "mask": "0x10000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "bdev_raid": { 00:04:59.120 "mask": "0x20000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 }, 00:04:59.120 "scheduler": { 00:04:59.120 "mask": "0x40000", 00:04:59.120 "tpoint_mask": "0x0" 00:04:59.120 } 00:04:59.120 }' 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.120 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.378 ************************************ 00:04:59.378 END TEST rpc_trace_cmd_test 00:04:59.378 ************************************ 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.378 00:04:59.378 real 0m0.285s 00:04:59.378 user 0m0.240s 00:04:59.378 sys 0m0.033s 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.378 15:52:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.378 15:52:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.378 15:52:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.379 15:52:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.379 15:52:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.379 15:52:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.379 15:52:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.379 ************************************ 00:04:59.379 START TEST rpc_daemon_integrity 00:04:59.379 ************************************ 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.379 
15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.379 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.640 { 00:04:59.640 "name": "Malloc2", 00:04:59.640 "aliases": [ 00:04:59.640 "d8af29e5-f041-42ba-acda-337281fe5605" 00:04:59.640 ], 00:04:59.640 "product_name": "Malloc disk", 00:04:59.640 "block_size": 512, 00:04:59.640 "num_blocks": 16384, 00:04:59.640 "uuid": "d8af29e5-f041-42ba-acda-337281fe5605", 00:04:59.640 "assigned_rate_limits": { 00:04:59.640 "rw_ios_per_sec": 0, 00:04:59.640 "rw_mbytes_per_sec": 0, 00:04:59.640 "r_mbytes_per_sec": 0, 00:04:59.640 "w_mbytes_per_sec": 0 00:04:59.640 }, 00:04:59.640 "claimed": false, 00:04:59.640 "zoned": false, 00:04:59.640 "supported_io_types": { 00:04:59.640 "read": true, 00:04:59.640 "write": true, 00:04:59.640 "unmap": true, 00:04:59.640 "flush": true, 00:04:59.640 "reset": true, 00:04:59.640 "nvme_admin": false, 00:04:59.640 "nvme_io": false, 00:04:59.640 "nvme_io_md": false, 00:04:59.640 "write_zeroes": true, 00:04:59.640 "zcopy": true, 00:04:59.640 "get_zone_info": false, 00:04:59.640 "zone_management": false, 00:04:59.640 "zone_append": false, 00:04:59.640 "compare": false, 00:04:59.640 "compare_and_write": false, 00:04:59.640 "abort": true, 00:04:59.640 "seek_hole": false, 00:04:59.640 "seek_data": false, 00:04:59.640 "copy": true, 00:04:59.640 "nvme_iov_md": false 00:04:59.640 }, 00:04:59.640 "memory_domains": [ 00:04:59.640 { 00:04:59.640 "dma_device_id": "system", 00:04:59.640 "dma_device_type": 1 00:04:59.640 }, 00:04:59.640 { 00:04:59.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.640 "dma_device_type": 2 00:04:59.640 } 00:04:59.640 ], 00:04:59.640 "driver_specific": {} 00:04:59.640 } 00:04:59.640 ]' 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.640 [2024-11-20 15:52:57.728805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.640 [2024-11-20 15:52:57.729042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:59.640 [2024-11-20 15:52:57.729070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ffb030 00:04:59.640 [2024-11-20 15:52:57.729081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.640 [2024-11-20 15:52:57.730649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.640 [2024-11-20 15:52:57.730696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.640 Passthru0 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.640 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.640 { 00:04:59.640 "name": "Malloc2", 00:04:59.640 "aliases": [ 00:04:59.640 "d8af29e5-f041-42ba-acda-337281fe5605" 00:04:59.640 ], 00:04:59.640 "product_name": "Malloc disk", 00:04:59.640 "block_size": 512, 00:04:59.640 "num_blocks": 16384, 00:04:59.640 "uuid": "d8af29e5-f041-42ba-acda-337281fe5605", 00:04:59.640 "assigned_rate_limits": { 00:04:59.640 "rw_ios_per_sec": 0, 00:04:59.640 "rw_mbytes_per_sec": 0, 00:04:59.640 "r_mbytes_per_sec": 0, 00:04:59.640 "w_mbytes_per_sec": 0 00:04:59.640 }, 00:04:59.640 "claimed": true, 00:04:59.640 "claim_type": "exclusive_write", 00:04:59.640 "zoned": false, 00:04:59.640 "supported_io_types": { 00:04:59.640 "read": true, 00:04:59.640 "write": true, 00:04:59.640 "unmap": true, 00:04:59.640 "flush": true, 00:04:59.640 "reset": true, 00:04:59.640 "nvme_admin": false, 00:04:59.640 "nvme_io": false, 00:04:59.640 "nvme_io_md": false, 00:04:59.640 "write_zeroes": true, 00:04:59.640 "zcopy": true, 00:04:59.640 "get_zone_info": false, 00:04:59.641 "zone_management": false, 00:04:59.641 "zone_append": false, 00:04:59.641 "compare": false, 00:04:59.641 "compare_and_write": false, 00:04:59.641 "abort": true, 00:04:59.641 "seek_hole": false, 00:04:59.641 "seek_data": false, 00:04:59.641 "copy": true, 00:04:59.641 "nvme_iov_md": false 00:04:59.641 }, 00:04:59.641 "memory_domains": [ 00:04:59.641 { 00:04:59.641 "dma_device_id": "system", 00:04:59.641 "dma_device_type": 1 00:04:59.641 }, 00:04:59.641 { 00:04:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.641 "dma_device_type": 2 00:04:59.641 } 00:04:59.641 ], 00:04:59.641 "driver_specific": {} 00:04:59.641 }, 00:04:59.641 { 00:04:59.641 "name": "Passthru0", 00:04:59.641 "aliases": [ 00:04:59.641 "1af15f89-7c1e-5970-8e40-de70fa6968db" 00:04:59.641 ], 00:04:59.641 "product_name": "passthru", 00:04:59.641 "block_size": 512, 00:04:59.641 "num_blocks": 16384, 00:04:59.641 "uuid": "1af15f89-7c1e-5970-8e40-de70fa6968db", 00:04:59.641 "assigned_rate_limits": { 00:04:59.641 "rw_ios_per_sec": 0, 00:04:59.641 "rw_mbytes_per_sec": 0, 00:04:59.641 "r_mbytes_per_sec": 0, 00:04:59.641 "w_mbytes_per_sec": 0 00:04:59.641 }, 00:04:59.641 "claimed": false, 00:04:59.641 "zoned": false, 00:04:59.641 "supported_io_types": { 00:04:59.641 "read": true, 00:04:59.641 "write": true, 00:04:59.641 "unmap": true, 00:04:59.641 "flush": true, 00:04:59.641 "reset": true, 00:04:59.641 "nvme_admin": false, 00:04:59.641 "nvme_io": false, 00:04:59.641 
"nvme_io_md": false, 00:04:59.641 "write_zeroes": true, 00:04:59.641 "zcopy": true, 00:04:59.641 "get_zone_info": false, 00:04:59.641 "zone_management": false, 00:04:59.641 "zone_append": false, 00:04:59.641 "compare": false, 00:04:59.641 "compare_and_write": false, 00:04:59.641 "abort": true, 00:04:59.641 "seek_hole": false, 00:04:59.641 "seek_data": false, 00:04:59.641 "copy": true, 00:04:59.641 "nvme_iov_md": false 00:04:59.641 }, 00:04:59.641 "memory_domains": [ 00:04:59.641 { 00:04:59.641 "dma_device_id": "system", 00:04:59.641 "dma_device_type": 1 00:04:59.641 }, 00:04:59.641 { 00:04:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.641 "dma_device_type": 2 00:04:59.641 } 00:04:59.641 ], 00:04:59.641 "driver_specific": { 00:04:59.641 "passthru": { 00:04:59.641 "name": "Passthru0", 00:04:59.641 "base_bdev_name": "Malloc2" 00:04:59.641 } 00:04:59.641 } 00:04:59.641 } 00:04:59.641 ]' 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.641 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.899 ************************************ 00:04:59.899 END TEST rpc_daemon_integrity 00:04:59.899 ************************************ 00:04:59.899 15:52:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.899 00:04:59.899 real 0m0.317s 00:04:59.899 user 0m0.199s 00:04:59.899 sys 0m0.051s 00:04:59.899 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.899 15:52:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.899 15:52:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.899 15:52:57 rpc -- rpc/rpc.sh@84 -- # killprocess 56958 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 56958 ']' 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@958 -- # kill -0 56958 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56958 00:04:59.899 killing process with pid 56958 00:04:59.899 15:52:57 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56958' 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@973 -- # kill 56958 00:04:59.899 15:52:57 rpc -- common/autotest_common.sh@978 -- # wait 56958 00:05:00.158 00:05:00.158 real 0m2.966s 00:05:00.158 user 0m3.809s 00:05:00.158 sys 0m0.739s 00:05:00.158 15:52:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.158 15:52:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.158 ************************************ 00:05:00.158 END TEST rpc 00:05:00.158 ************************************ 00:05:00.158 15:52:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.158 15:52:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.158 15:52:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.158 15:52:58 -- common/autotest_common.sh@10 -- # set +x 00:05:00.478 ************************************ 00:05:00.478 START TEST skip_rpc 00:05:00.478 ************************************ 00:05:00.478 15:52:58 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.478 * Looking for test storage... 00:05:00.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.478 15:52:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.478 15:52:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.478 15:52:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.478 15:52:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.478 15:52:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.479 15:52:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.479 --rc genhtml_branch_coverage=1 00:05:00.479 --rc genhtml_function_coverage=1 00:05:00.479 --rc genhtml_legend=1 00:05:00.479 --rc geninfo_all_blocks=1 00:05:00.479 --rc geninfo_unexecuted_blocks=1 00:05:00.479 00:05:00.479 ' 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.479 --rc genhtml_branch_coverage=1 00:05:00.479 --rc genhtml_function_coverage=1 00:05:00.479 --rc genhtml_legend=1 00:05:00.479 --rc geninfo_all_blocks=1 00:05:00.479 --rc geninfo_unexecuted_blocks=1 00:05:00.479 00:05:00.479 ' 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.479 --rc genhtml_branch_coverage=1 00:05:00.479 --rc genhtml_function_coverage=1 00:05:00.479 --rc genhtml_legend=1 00:05:00.479 --rc geninfo_all_blocks=1 00:05:00.479 --rc geninfo_unexecuted_blocks=1 00:05:00.479 00:05:00.479 ' 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.479 --rc genhtml_branch_coverage=1 00:05:00.479 --rc genhtml_function_coverage=1 00:05:00.479 --rc genhtml_legend=1 00:05:00.479 --rc geninfo_all_blocks=1 00:05:00.479 --rc geninfo_unexecuted_blocks=1 00:05:00.479 00:05:00.479 ' 00:05:00.479 15:52:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.479 15:52:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.479 15:52:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.479 15:52:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.479 ************************************ 00:05:00.479 START TEST skip_rpc 00:05:00.479 ************************************ 00:05:00.479 15:52:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:00.479 15:52:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57164 00:05:00.479 15:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.479 15:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.479 15:52:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.479 [2024-11-20 15:52:58.656040] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:00.479 [2024-11-20 15:52:58.656336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57164 ] 00:05:00.739 [2024-11-20 15:52:58.809253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.739 [2024-11-20 15:52:58.882228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.739 [2024-11-20 15:52:58.965638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57164 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57164 ']' 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57164 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57164 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.004 killing process with pid 57164 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57164' 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57164 00:05:06.004 15:53:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57164 00:05:06.004 ************************************ 00:05:06.004 END TEST skip_rpc 00:05:06.004 ************************************ 00:05:06.004 00:05:06.004 real 0m5.446s 00:05:06.004 user 0m5.064s 00:05:06.004 sys 0m0.294s 00:05:06.004 15:53:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.004 15:53:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.005 15:53:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:06.005 15:53:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.005 15:53:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.005 15:53:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.005 ************************************ 00:05:06.005 START TEST skip_rpc_with_json 00:05:06.005 ************************************ 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57250 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57250 00:05:06.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57250 ']' 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.005 15:53:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.005 [2024-11-20 15:53:04.152590] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:06.005 [2024-11-20 15:53:04.153064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57250 ] 00:05:06.263 [2024-11-20 15:53:04.298781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.263 [2024-11-20 15:53:04.364492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.263 [2024-11-20 15:53:04.439786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.198 [2024-11-20 15:53:05.183505] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.198 request: 00:05:07.198 { 00:05:07.198 "trtype": "tcp", 00:05:07.198 "method": "nvmf_get_transports", 00:05:07.198 "req_id": 1 00:05:07.198 } 00:05:07.198 Got JSON-RPC error response 00:05:07.198 response: 00:05:07.198 { 00:05:07.198 "code": -19, 00:05:07.198 "message": "No such device" 00:05:07.198 } 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.198 [2024-11-20 15:53:05.195621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.198 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.198 { 00:05:07.198 "subsystems": [ 00:05:07.198 { 00:05:07.198 "subsystem": "fsdev", 00:05:07.198 "config": [ 00:05:07.198 { 00:05:07.198 "method": "fsdev_set_opts", 00:05:07.198 "params": { 00:05:07.198 "fsdev_io_pool_size": 65535, 00:05:07.198 "fsdev_io_cache_size": 256 00:05:07.198 } 00:05:07.198 } 00:05:07.198 ] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "keyring", 00:05:07.198 "config": [] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "iobuf", 00:05:07.198 "config": [ 00:05:07.198 { 00:05:07.198 "method": "iobuf_set_options", 00:05:07.198 "params": { 00:05:07.198 "small_pool_count": 8192, 00:05:07.198 "large_pool_count": 1024, 00:05:07.198 "small_bufsize": 8192, 00:05:07.198 "large_bufsize": 135168, 00:05:07.198 "enable_numa": false 00:05:07.198 } 
00:05:07.198 } 00:05:07.198 ] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "sock", 00:05:07.198 "config": [ 00:05:07.198 { 00:05:07.198 "method": "sock_set_default_impl", 00:05:07.198 "params": { 00:05:07.198 "impl_name": "uring" 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "sock_impl_set_options", 00:05:07.198 "params": { 00:05:07.198 "impl_name": "ssl", 00:05:07.198 "recv_buf_size": 4096, 00:05:07.198 "send_buf_size": 4096, 00:05:07.198 "enable_recv_pipe": true, 00:05:07.198 "enable_quickack": false, 00:05:07.198 "enable_placement_id": 0, 00:05:07.198 "enable_zerocopy_send_server": true, 00:05:07.198 "enable_zerocopy_send_client": false, 00:05:07.198 "zerocopy_threshold": 0, 00:05:07.198 "tls_version": 0, 00:05:07.198 "enable_ktls": false 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "sock_impl_set_options", 00:05:07.198 "params": { 00:05:07.198 "impl_name": "posix", 00:05:07.198 "recv_buf_size": 2097152, 00:05:07.198 "send_buf_size": 2097152, 00:05:07.198 "enable_recv_pipe": true, 00:05:07.198 "enable_quickack": false, 00:05:07.198 "enable_placement_id": 0, 00:05:07.198 "enable_zerocopy_send_server": true, 00:05:07.198 "enable_zerocopy_send_client": false, 00:05:07.198 "zerocopy_threshold": 0, 00:05:07.198 "tls_version": 0, 00:05:07.198 "enable_ktls": false 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "sock_impl_set_options", 00:05:07.198 "params": { 00:05:07.198 "impl_name": "uring", 00:05:07.198 "recv_buf_size": 2097152, 00:05:07.198 "send_buf_size": 2097152, 00:05:07.198 "enable_recv_pipe": true, 00:05:07.198 "enable_quickack": false, 00:05:07.198 "enable_placement_id": 0, 00:05:07.198 "enable_zerocopy_send_server": false, 00:05:07.198 "enable_zerocopy_send_client": false, 00:05:07.198 "zerocopy_threshold": 0, 00:05:07.198 "tls_version": 0, 00:05:07.198 "enable_ktls": false 00:05:07.198 } 00:05:07.198 } 00:05:07.198 ] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "vmd", 00:05:07.198 "config": [] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "accel", 00:05:07.198 "config": [ 00:05:07.198 { 00:05:07.198 "method": "accel_set_options", 00:05:07.198 "params": { 00:05:07.198 "small_cache_size": 128, 00:05:07.198 "large_cache_size": 16, 00:05:07.198 "task_count": 2048, 00:05:07.198 "sequence_count": 2048, 00:05:07.198 "buf_count": 2048 00:05:07.198 } 00:05:07.198 } 00:05:07.198 ] 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "subsystem": "bdev", 00:05:07.198 "config": [ 00:05:07.198 { 00:05:07.198 "method": "bdev_set_options", 00:05:07.198 "params": { 00:05:07.198 "bdev_io_pool_size": 65535, 00:05:07.198 "bdev_io_cache_size": 256, 00:05:07.198 "bdev_auto_examine": true, 00:05:07.198 "iobuf_small_cache_size": 128, 00:05:07.198 "iobuf_large_cache_size": 16 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "bdev_raid_set_options", 00:05:07.198 "params": { 00:05:07.198 "process_window_size_kb": 1024, 00:05:07.198 "process_max_bandwidth_mb_sec": 0 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "bdev_iscsi_set_options", 00:05:07.198 "params": { 00:05:07.198 "timeout_sec": 30 00:05:07.198 } 00:05:07.198 }, 00:05:07.198 { 00:05:07.198 "method": "bdev_nvme_set_options", 00:05:07.198 "params": { 00:05:07.198 "action_on_timeout": "none", 00:05:07.198 "timeout_us": 0, 00:05:07.198 "timeout_admin_us": 0, 00:05:07.198 "keep_alive_timeout_ms": 10000, 00:05:07.198 "arbitration_burst": 0, 00:05:07.198 "low_priority_weight": 0, 00:05:07.198 "medium_priority_weight": 
0, 00:05:07.198 "high_priority_weight": 0, 00:05:07.198 "nvme_adminq_poll_period_us": 10000, 00:05:07.198 "nvme_ioq_poll_period_us": 0, 00:05:07.198 "io_queue_requests": 0, 00:05:07.198 "delay_cmd_submit": true, 00:05:07.198 "transport_retry_count": 4, 00:05:07.198 "bdev_retry_count": 3, 00:05:07.198 "transport_ack_timeout": 0, 00:05:07.198 "ctrlr_loss_timeout_sec": 0, 00:05:07.198 "reconnect_delay_sec": 0, 00:05:07.198 "fast_io_fail_timeout_sec": 0, 00:05:07.198 "disable_auto_failback": false, 00:05:07.198 "generate_uuids": false, 00:05:07.198 "transport_tos": 0, 00:05:07.198 "nvme_error_stat": false, 00:05:07.198 "rdma_srq_size": 0, 00:05:07.198 "io_path_stat": false, 00:05:07.198 "allow_accel_sequence": false, 00:05:07.198 "rdma_max_cq_size": 0, 00:05:07.198 "rdma_cm_event_timeout_ms": 0, 00:05:07.198 "dhchap_digests": [ 00:05:07.198 "sha256", 00:05:07.198 "sha384", 00:05:07.198 "sha512" 00:05:07.198 ], 00:05:07.199 "dhchap_dhgroups": [ 00:05:07.199 "null", 00:05:07.199 "ffdhe2048", 00:05:07.199 "ffdhe3072", 00:05:07.199 "ffdhe4096", 00:05:07.199 "ffdhe6144", 00:05:07.199 "ffdhe8192" 00:05:07.199 ] 00:05:07.199 } 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "method": "bdev_nvme_set_hotplug", 00:05:07.199 "params": { 00:05:07.199 "period_us": 100000, 00:05:07.199 "enable": false 00:05:07.199 } 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "method": "bdev_wait_for_examine" 00:05:07.199 } 00:05:07.199 ] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "scsi", 00:05:07.199 "config": null 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "scheduler", 00:05:07.199 "config": [ 00:05:07.199 { 00:05:07.199 "method": "framework_set_scheduler", 00:05:07.199 "params": { 00:05:07.199 "name": "static" 00:05:07.199 } 00:05:07.199 } 00:05:07.199 ] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "vhost_scsi", 00:05:07.199 "config": [] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "vhost_blk", 00:05:07.199 "config": [] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "ublk", 00:05:07.199 "config": [] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "nbd", 00:05:07.199 "config": [] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "nvmf", 00:05:07.199 "config": [ 00:05:07.199 { 00:05:07.199 "method": "nvmf_set_config", 00:05:07.199 "params": { 00:05:07.199 "discovery_filter": "match_any", 00:05:07.199 "admin_cmd_passthru": { 00:05:07.199 "identify_ctrlr": false 00:05:07.199 }, 00:05:07.199 "dhchap_digests": [ 00:05:07.199 "sha256", 00:05:07.199 "sha384", 00:05:07.199 "sha512" 00:05:07.199 ], 00:05:07.199 "dhchap_dhgroups": [ 00:05:07.199 "null", 00:05:07.199 "ffdhe2048", 00:05:07.199 "ffdhe3072", 00:05:07.199 "ffdhe4096", 00:05:07.199 "ffdhe6144", 00:05:07.199 "ffdhe8192" 00:05:07.199 ] 00:05:07.199 } 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "method": "nvmf_set_max_subsystems", 00:05:07.199 "params": { 00:05:07.199 "max_subsystems": 1024 00:05:07.199 } 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "method": "nvmf_set_crdt", 00:05:07.199 "params": { 00:05:07.199 "crdt1": 0, 00:05:07.199 "crdt2": 0, 00:05:07.199 "crdt3": 0 00:05:07.199 } 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "method": "nvmf_create_transport", 00:05:07.199 "params": { 00:05:07.199 "trtype": "TCP", 00:05:07.199 "max_queue_depth": 128, 00:05:07.199 "max_io_qpairs_per_ctrlr": 127, 00:05:07.199 "in_capsule_data_size": 4096, 00:05:07.199 "max_io_size": 131072, 00:05:07.199 "io_unit_size": 131072, 00:05:07.199 "max_aq_depth": 128, 00:05:07.199 "num_shared_buffers": 511, 00:05:07.199 
"buf_cache_size": 4294967295, 00:05:07.199 "dif_insert_or_strip": false, 00:05:07.199 "zcopy": false, 00:05:07.199 "c2h_success": true, 00:05:07.199 "sock_priority": 0, 00:05:07.199 "abort_timeout_sec": 1, 00:05:07.199 "ack_timeout": 0, 00:05:07.199 "data_wr_pool_size": 0 00:05:07.199 } 00:05:07.199 } 00:05:07.199 ] 00:05:07.199 }, 00:05:07.199 { 00:05:07.199 "subsystem": "iscsi", 00:05:07.199 "config": [ 00:05:07.199 { 00:05:07.199 "method": "iscsi_set_options", 00:05:07.199 "params": { 00:05:07.199 "node_base": "iqn.2016-06.io.spdk", 00:05:07.199 "max_sessions": 128, 00:05:07.199 "max_connections_per_session": 2, 00:05:07.199 "max_queue_depth": 64, 00:05:07.199 "default_time2wait": 2, 00:05:07.199 "default_time2retain": 20, 00:05:07.199 "first_burst_length": 8192, 00:05:07.199 "immediate_data": true, 00:05:07.199 "allow_duplicated_isid": false, 00:05:07.199 "error_recovery_level": 0, 00:05:07.199 "nop_timeout": 60, 00:05:07.199 "nop_in_interval": 30, 00:05:07.199 "disable_chap": false, 00:05:07.199 "require_chap": false, 00:05:07.199 "mutual_chap": false, 00:05:07.199 "chap_group": 0, 00:05:07.199 "max_large_datain_per_connection": 64, 00:05:07.199 "max_r2t_per_connection": 4, 00:05:07.199 "pdu_pool_size": 36864, 00:05:07.199 "immediate_data_pool_size": 16384, 00:05:07.199 "data_out_pool_size": 2048 00:05:07.199 } 00:05:07.199 } 00:05:07.199 ] 00:05:07.199 } 00:05:07.199 ] 00:05:07.199 } 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57250 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57250 ']' 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57250 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57250 00:05:07.199 killing process with pid 57250 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57250' 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57250 00:05:07.199 15:53:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57250 00:05:07.765 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57278 00:05:07.765 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.765 15:53:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57278 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57278 ']' 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57278 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:13.065 15:53:10 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57278 00:05:13.065 killing process with pid 57278 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57278' 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57278 00:05:13.065 15:53:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57278 00:05:13.065 15:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.065 15:53:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:13.065 ************************************ 00:05:13.065 END TEST skip_rpc_with_json 00:05:13.065 ************************************ 00:05:13.065 00:05:13.065 real 0m7.189s 00:05:13.065 user 0m6.944s 00:05:13.065 sys 0m0.701s 00:05:13.065 15:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.065 15:53:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:13.065 15:53:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:13.065 15:53:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.065 15:53:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.065 15:53:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.324 ************************************ 00:05:13.324 START TEST skip_rpc_with_delay 00:05:13.324 ************************************ 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.324 15:53:11 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:13.324 [2024-11-20 15:53:11.391053] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.324 00:05:13.324 real 0m0.096s 00:05:13.324 user 0m0.060s 00:05:13.324 sys 0m0.034s 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.324 15:53:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:13.324 ************************************ 00:05:13.324 END TEST skip_rpc_with_delay 00:05:13.324 ************************************ 00:05:13.324 15:53:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:13.324 15:53:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:13.324 15:53:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:13.324 15:53:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.324 15:53:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.324 15:53:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.324 ************************************ 00:05:13.324 START TEST exit_on_failed_rpc_init 00:05:13.324 ************************************ 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57387 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57387 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57387 ']' 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.324 15:53:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:13.325 [2024-11-20 15:53:11.552215] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:13.325 [2024-11-20 15:53:11.552331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57387 ] 00:05:13.582 [2024-11-20 15:53:11.709448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.582 [2024-11-20 15:53:11.771515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.840 [2024-11-20 15:53:11.844400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:14.407 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.666 [2024-11-20 15:53:12.674691] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:14.666 [2024-11-20 15:53:12.674864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57411 ] 00:05:14.666 [2024-11-20 15:53:12.827796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.666 [2024-11-20 15:53:12.902279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.666 [2024-11-20 15:53:12.902704] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
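The "/var/tmp/spdk.sock in use" error above is the point of the exit_on_failed_rpc_init test: both spdk_tgt instances default to the same RPC listen socket, so the second one cannot bind it and, as the records that follow show, spdk_app_stop exits non-zero. A rough way to reproduce the conflict (and the usual way around it) outside the harness, using only options that already appear in this log; the second socket path is arbitrary and this is a sketch, not part of the test:

    ./build/bin/spdk_tgt -m 0x1 &                       # first instance owns /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2                         # second instance fails exactly as logged above
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock  # a distinct -r RPC socket avoids the clash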
00:05:14.666 [2024-11-20 15:53:12.902731] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.666 [2024-11-20 15:53:12.902743] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57387 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57387 ']' 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57387 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.924 15:53:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57387 00:05:14.924 killing process with pid 57387 00:05:14.924 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.924 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.924 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57387' 00:05:14.924 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57387 00:05:14.924 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57387 00:05:15.489 ************************************ 00:05:15.489 END TEST exit_on_failed_rpc_init 00:05:15.489 ************************************ 00:05:15.489 00:05:15.489 real 0m1.964s 00:05:15.489 user 0m2.318s 00:05:15.489 sys 0m0.449s 00:05:15.489 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.489 15:53:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.489 15:53:13 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.489 ************************************ 00:05:15.489 END TEST skip_rpc 00:05:15.489 ************************************ 00:05:15.489 00:05:15.489 real 0m15.072s 00:05:15.489 user 0m14.559s 00:05:15.489 sys 0m1.682s 00:05:15.489 15:53:13 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.489 15:53:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.489 15:53:13 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:15.489 15:53:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.489 15:53:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.489 15:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.489 
************************************ 00:05:15.489 START TEST rpc_client 00:05:15.489 ************************************ 00:05:15.489 15:53:13 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:15.489 * Looking for test storage... 00:05:15.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:15.489 15:53:13 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.489 15:53:13 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.489 15:53:13 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.756 15:53:13 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:15.756 OK 00:05:15.756 15:53:13 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.756 00:05:15.756 real 0m0.251s 00:05:15.756 user 0m0.159s 00:05:15.756 sys 0m0.098s 00:05:15.756 ************************************ 00:05:15.756 END TEST rpc_client 00:05:15.756 ************************************ 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.756 15:53:13 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.756 15:53:13 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.756 15:53:13 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.756 15:53:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.756 15:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:15.756 ************************************ 00:05:15.756 START TEST json_config 00:05:15.756 ************************************ 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.756 15:53:13 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.756 15:53:13 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.756 15:53:13 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.756 15:53:13 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.756 15:53:13 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.756 15:53:13 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.756 15:53:13 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.756 15:53:13 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.756 15:53:13 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.756 15:53:13 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.756 15:53:13 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.756 15:53:13 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.756 15:53:13 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.756 15:53:13 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.756 15:53:13 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.756 --rc genhtml_function_coverage=1 00:05:15.756 --rc genhtml_legend=1 00:05:15.756 --rc geninfo_all_blocks=1 00:05:15.756 --rc geninfo_unexecuted_blocks=1 00:05:15.756 00:05:15.756 ' 00:05:15.756 15:53:13 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.756 --rc genhtml_branch_coverage=1 00:05:15.757 --rc genhtml_function_coverage=1 00:05:15.757 --rc genhtml_legend=1 00:05:15.757 --rc geninfo_all_blocks=1 00:05:15.757 --rc geninfo_unexecuted_blocks=1 00:05:15.757 00:05:15.757 ' 00:05:15.757 15:53:13 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.757 --rc genhtml_branch_coverage=1 00:05:15.757 --rc genhtml_function_coverage=1 00:05:15.757 --rc genhtml_legend=1 00:05:15.757 --rc geninfo_all_blocks=1 00:05:15.757 --rc geninfo_unexecuted_blocks=1 00:05:15.757 00:05:15.757 ' 00:05:15.757 15:53:13 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:15.757 15:53:13 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.757 15:53:14 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.757 15:53:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:16.014 15:53:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:16.014 15:53:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.014 15:53:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.014 15:53:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.014 15:53:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.014 15:53:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.014 15:53:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.014 15:53:14 json_config -- paths/export.sh@5 -- # export PATH 00:05:16.014 15:53:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@51 -- # : 0 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:16.014 15:53:14 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:16.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:16.014 15:53:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:16.014 INFO: JSON configuration test init 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.014 15:53:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:16.014 15:53:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:16.014 15:53:14 json_config -- json_config/common.sh@10 -- # shift 
00:05:16.014 15:53:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.014 15:53:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.014 15:53:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.014 15:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.014 15:53:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.014 15:53:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57545 00:05:16.014 15:53:14 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:16.014 15:53:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.014 Waiting for target to run... 00:05:16.014 15:53:14 json_config -- json_config/common.sh@25 -- # waitforlisten 57545 /var/tmp/spdk_tgt.sock 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@835 -- # '[' -z 57545 ']' 00:05:16.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.014 15:53:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.014 [2024-11-20 15:53:14.111196] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:16.014 [2024-11-20 15:53:14.111300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57545 ] 00:05:16.577 [2024-11-20 15:53:14.540758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.577 [2024-11-20 15:53:14.590802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.140 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:17.140 15:53:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.140 15:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:17.140 15:53:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:17.140 15:53:15 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:17.397 [2024-11-20 15:53:15.475043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:17.654 15:53:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.654 15:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:17.654 15:53:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:17.654 15:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@54 -- # sort 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:17.912 15:53:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.912 15:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:17.912 15:53:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.912 15:53:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 15:53:15 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:17.912 15:53:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:17.912 15:53:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:18.170 MallocForNvmf0 00:05:18.170 15:53:16 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.170 15:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:18.428 MallocForNvmf1 00:05:18.428 15:53:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.428 15:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:18.685 [2024-11-20 15:53:16.857593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.685 15:53:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.685 15:53:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:18.943 15:53:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:18.943 15:53:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.201 15:53:17 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.201 15:53:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:19.504 15:53:17 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.504 15:53:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:19.763 [2024-11-20 15:53:17.902222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:19.763 15:53:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:19.763 15:53:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.763 15:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 15:53:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:19.763 15:53:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.763 15:53:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.763 15:53:17 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:19.763 15:53:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:19.763 15:53:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.021 MallocBdevForConfigChangeCheck 00:05:20.021 15:53:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:20.021 15:53:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.021 15:53:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.279 15:53:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:20.279 15:53:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.537 INFO: shutting down applications... 00:05:20.537 15:53:18 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:20.537 15:53:18 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:20.537 15:53:18 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:20.537 15:53:18 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:20.537 15:53:18 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.102 Calling clear_iscsi_subsystem 00:05:21.102 Calling clear_nvmf_subsystem 00:05:21.102 Calling clear_nbd_subsystem 00:05:21.102 Calling clear_ublk_subsystem 00:05:21.102 Calling clear_vhost_blk_subsystem 00:05:21.102 Calling clear_vhost_scsi_subsystem 00:05:21.103 Calling clear_bdev_subsystem 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.103 15:53:19 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.360 15:53:19 json_config -- json_config/json_config.sh@352 -- # break 00:05:21.360 15:53:19 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:21.360 15:53:19 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:21.360 15:53:19 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.360 15:53:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.360 15:53:19 json_config -- json_config/common.sh@35 -- # [[ -n 57545 ]] 00:05:21.360 15:53:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57545 00:05:21.360 15:53:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.360 15:53:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.360 15:53:19 json_config -- json_config/common.sh@41 -- # kill -0 57545 00:05:21.360 15:53:19 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:21.926 15:53:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.926 15:53:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.926 15:53:20 json_config -- json_config/common.sh@41 -- # kill -0 57545 00:05:21.926 15:53:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.926 15:53:20 json_config -- json_config/common.sh@43 -- # break 00:05:21.926 15:53:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.926 SPDK target shutdown done 00:05:21.926 15:53:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.926 INFO: relaunching applications... 00:05:21.926 15:53:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:21.926 15:53:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.926 15:53:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:21.926 15:53:20 json_config -- json_config/common.sh@10 -- # shift 00:05:21.926 15:53:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.926 15:53:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.926 15:53:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.926 15:53:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.926 15:53:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.926 15:53:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57746 00:05:21.926 15:53:20 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:21.926 Waiting for target to run... 00:05:21.926 15:53:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.926 15:53:20 json_config -- json_config/common.sh@25 -- # waitforlisten 57746 /var/tmp/spdk_tgt.sock 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 57746 ']' 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.926 15:53:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.926 [2024-11-20 15:53:20.088012] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
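The relaunch starting here is the second half of the json_config test: the first target was started with --wait-for-rpc and configured entirely over RPC, while this one is handed the file written by save_config via --json, so it should come up with the same subsystems without any RPC traffic. Both invocations, copied from this trace (paths shortened):

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc               # first boot, RPC-driven
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json  # relaunch from saved config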
00:05:21.926 [2024-11-20 15:53:20.088120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57746 ] 00:05:22.491 [2024-11-20 15:53:20.511217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.491 [2024-11-20 15:53:20.563143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.491 [2024-11-20 15:53:20.701084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.749 [2024-11-20 15:53:20.919646] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.749 [2024-11-20 15:53:20.951748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.006 15:53:21 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.006 00:05:23.006 15:53:21 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:23.006 15:53:21 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.006 INFO: Checking if target configuration is the same... 00:05:23.006 15:53:21 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:23.006 15:53:21 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.006 15:53:21 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.006 15:53:21 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:23.006 15:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.006 + '[' 2 -ne 2 ']' 00:05:23.006 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.006 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:23.006 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.006 +++ basename /dev/fd/62 00:05:23.006 ++ mktemp /tmp/62.XXX 00:05:23.006 + tmp_file_1=/tmp/62.PAs 00:05:23.006 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.006 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.006 + tmp_file_2=/tmp/spdk_tgt_config.json.TdX 00:05:23.006 + ret=0 00:05:23.006 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.572 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:23.572 + diff -u /tmp/62.PAs /tmp/spdk_tgt_config.json.TdX 00:05:23.572 INFO: JSON config files are the same 00:05:23.572 + echo 'INFO: JSON config files are the same' 00:05:23.572 + rm /tmp/62.PAs /tmp/spdk_tgt_config.json.TdX 00:05:23.572 + exit 0 00:05:23.572 15:53:21 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:23.572 INFO: changing configuration and checking if this can be detected... 00:05:23.572 15:53:21 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
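The equality check that just passed works by asking the relaunched target for its live configuration over save_config, key-sorting both that dump and the spdk_tgt_config.json it booted from, and diffing the results; an empty diff is what "JSON config files are the same" means here. A hand-run equivalent, assuming config_filter.py reads the config on stdin as the pipeline above suggests:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    ./test/json_config/config_filter.py -method sort < /tmp/live.json        > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json  > /tmp/saved.sorted
    diff -u /tmp/saved.sorted /tmp/live.sorted   # empty diff == configurations match

The step announced above then deliberately changes the configuration to confirm the same comparison can also fail.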
00:05:23.572 15:53:21 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.572 15:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.830 15:53:21 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.830 15:53:21 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:23.830 15:53:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.830 + '[' 2 -ne 2 ']' 00:05:23.830 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:23.830 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:23.830 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:23.830 +++ basename /dev/fd/62 00:05:23.831 ++ mktemp /tmp/62.XXX 00:05:23.831 + tmp_file_1=/tmp/62.RJU 00:05:23.831 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:23.831 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.831 + tmp_file_2=/tmp/spdk_tgt_config.json.S8G 00:05:23.831 + ret=0 00:05:23.831 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.397 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:24.397 + diff -u /tmp/62.RJU /tmp/spdk_tgt_config.json.S8G 00:05:24.397 + ret=1 00:05:24.397 + echo '=== Start of file: /tmp/62.RJU ===' 00:05:24.397 + cat /tmp/62.RJU 00:05:24.397 + echo '=== End of file: /tmp/62.RJU ===' 00:05:24.397 + echo '' 00:05:24.397 + echo '=== Start of file: /tmp/spdk_tgt_config.json.S8G ===' 00:05:24.397 + cat /tmp/spdk_tgt_config.json.S8G 00:05:24.397 + echo '=== End of file: /tmp/spdk_tgt_config.json.S8G ===' 00:05:24.397 + echo '' 00:05:24.397 + rm /tmp/62.RJU /tmp/spdk_tgt_config.json.S8G 00:05:24.397 + exit 1 00:05:24.397 INFO: configuration change detected. 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
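The change-detection step is the same comparison run after a deliberate mutation: a marker malloc bdev created earlier in the test is deleted over RPC, and this time the diff is required to be non-empty. A hedged sketch, with the bdev name and paths taken from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
sock=/var/tmp/spdk_tgt.sock
orig_json=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

# Remove the marker bdev, then re-run the comparison; a zero exit status
# from diff would now mean the change went undetected.
"$rpc" -s "$sock" bdev_malloc_delete MallocBdevForConfigChangeCheck
if "$rpc" -s "$sock" save_config | "$filter" -method sort | \
        diff -u <("$filter" -method sort < "$orig_json") - > /dev/null; then
    echo 'ERROR: configuration change was not detected' >&2
    exit 1
fi
echo 'INFO: configuration change detected.'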
00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@324 -- # [[ -n 57746 ]] 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.397 15:53:22 json_config -- json_config/json_config.sh@330 -- # killprocess 57746 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@954 -- # '[' -z 57746 ']' 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@958 -- # kill -0 57746 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57746 00:05:24.397 killing process with pid 57746 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57746' 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@973 -- # kill 57746 00:05:24.397 15:53:22 json_config -- common/autotest_common.sh@978 -- # wait 57746 00:05:24.655 15:53:22 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:24.655 15:53:22 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:24.655 15:53:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.655 15:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.655 INFO: Success 00:05:24.655 15:53:22 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:24.655 15:53:22 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:24.655 00:05:24.655 real 0m9.068s 00:05:24.655 user 0m13.136s 00:05:24.655 sys 0m1.759s 00:05:24.655 
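killprocess, traced above, is a guarded kill: it checks that the pid is set and still alive, looks up the command name with ps, then signals and reaps it. A simplified sketch of that helper; the real one also has a FreeBSD branch and special-cases processes launched through sudo, both omitted here:

killprocess() {
    local pid=$1 process_name
    # Refuse to act if the pid is missing or the process is already gone.
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1
    # Look up the command name; bail out rather than signal a sudo wrapper.
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ "$process_name" != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # Reap the child so the test does not leave zombies behind.
    wait "$pid" 2>/dev/null || true
}

killprocess 57746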
15:53:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.655 ************************************ 00:05:24.655 END TEST json_config 00:05:24.655 ************************************ 00:05:24.655 15:53:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.914 15:53:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.914 15:53:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.914 15:53:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.914 15:53:22 -- common/autotest_common.sh@10 -- # set +x 00:05:24.914 ************************************ 00:05:24.914 START TEST json_config_extra_key 00:05:24.914 ************************************ 00:05:24.914 15:53:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.914 15:53:23 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.914 15:53:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.914 15:53:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.914 15:53:23 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:24.914 15:53:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.915 --rc genhtml_branch_coverage=1 00:05:24.915 --rc genhtml_function_coverage=1 00:05:24.915 --rc genhtml_legend=1 00:05:24.915 --rc geninfo_all_blocks=1 00:05:24.915 --rc geninfo_unexecuted_blocks=1 00:05:24.915 00:05:24.915 ' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.915 --rc genhtml_branch_coverage=1 00:05:24.915 --rc genhtml_function_coverage=1 00:05:24.915 --rc genhtml_legend=1 00:05:24.915 --rc geninfo_all_blocks=1 00:05:24.915 --rc geninfo_unexecuted_blocks=1 00:05:24.915 00:05:24.915 ' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.915 --rc genhtml_branch_coverage=1 00:05:24.915 --rc genhtml_function_coverage=1 00:05:24.915 --rc genhtml_legend=1 00:05:24.915 --rc geninfo_all_blocks=1 00:05:24.915 --rc geninfo_unexecuted_blocks=1 00:05:24.915 00:05:24.915 ' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.915 --rc genhtml_branch_coverage=1 00:05:24.915 --rc genhtml_function_coverage=1 00:05:24.915 --rc genhtml_legend=1 00:05:24.915 --rc geninfo_all_blocks=1 00:05:24.915 --rc geninfo_unexecuted_blocks=1 00:05:24.915 00:05:24.915 ' 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.915 15:53:23 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.915 15:53:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.915 15:53:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.915 15:53:23 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.915 15:53:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.915 15:53:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.915 15:53:23 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.915 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.915 15:53:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.915 INFO: launching applications... 
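The "[: : integer expression expected" message above is test's -eq being handed an empty string: the flag variable read at nvmf/common.sh line 33 is unset in this configuration, so '[' '' -eq 1 ']' cannot be evaluated as arithmetic. The usual defensive pattern is to default the expansion before comparing; a sketch, where SPDK_TEST_NVMF_NICS stands in for whichever flag that line actually reads:

# Fails with "[: : integer expression expected" when the variable is empty:
#   [ "$SOME_FLAG" -eq 1 ] && enable_feature
# Defaulting the expansion keeps the comparison well-formed either way:
if [[ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]]; then
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi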
00:05:24.915 15:53:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57900 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.915 Waiting for target to run... 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.915 15:53:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57900 /var/tmp/spdk_tgt.sock 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57900 ']' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.915 15:53:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:25.174 [2024-11-20 15:53:23.221874] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:25.174 [2024-11-20 15:53:23.221975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57900 ] 00:05:25.432 [2024-11-20 15:53:23.664031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.690 [2024-11-20 15:53:23.723681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.690 [2024-11-20 15:53:23.760013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.256 00:05:26.256 INFO: shutting down applications... 00:05:26.256 15:53:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.256 15:53:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:26.256 15:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
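The shutdown traced below follows a fixed handshake: send SIGINT to the target, then poll for up to roughly 15 seconds (30 tries, 0.5 s apart) until kill -0 stops succeeding. A minimal sketch of that loop, assuming the pid recorded at launch is the one passed in:

shutdown_app() {
    local pid=$1 i
    [[ -n "$pid" ]] || return 0
    kill -SIGINT "$pid"
    # Give the target up to 30 * 0.5 s to exit cleanly before giving up.
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "ERROR: target $pid did not exit after SIGINT" >&2
    return 1
}

shutdown_app 57900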
00:05:26.256 15:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57900 ]] 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57900 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57900 00:05:26.256 15:53:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57900 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.822 15:53:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.822 SPDK target shutdown done 00:05:26.822 15:53:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.822 Success 00:05:26.822 00:05:26.822 real 0m1.834s 00:05:26.822 user 0m1.773s 00:05:26.822 sys 0m0.471s 00:05:26.822 15:53:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.822 ************************************ 00:05:26.822 END TEST json_config_extra_key 00:05:26.822 ************************************ 00:05:26.822 15:53:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.822 15:53:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.822 15:53:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.822 15:53:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.822 15:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:26.822 ************************************ 00:05:26.822 START TEST alias_rpc 00:05:26.822 ************************************ 00:05:26.822 15:53:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.822 * Looking for test storage... 
00:05:26.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:26.822 15:53:24 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.822 15:53:24 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.822 15:53:24 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.822 15:53:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.822 15:53:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.823 15:53:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.823 --rc genhtml_branch_coverage=1 00:05:26.823 --rc genhtml_function_coverage=1 00:05:26.823 --rc genhtml_legend=1 00:05:26.823 --rc geninfo_all_blocks=1 00:05:26.823 --rc geninfo_unexecuted_blocks=1 00:05:26.823 00:05:26.823 ' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.823 --rc genhtml_branch_coverage=1 00:05:26.823 --rc genhtml_function_coverage=1 00:05:26.823 --rc genhtml_legend=1 00:05:26.823 --rc geninfo_all_blocks=1 00:05:26.823 --rc geninfo_unexecuted_blocks=1 00:05:26.823 00:05:26.823 ' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.823 --rc genhtml_branch_coverage=1 00:05:26.823 --rc genhtml_function_coverage=1 00:05:26.823 --rc genhtml_legend=1 00:05:26.823 --rc geninfo_all_blocks=1 00:05:26.823 --rc geninfo_unexecuted_blocks=1 00:05:26.823 00:05:26.823 ' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.823 --rc genhtml_branch_coverage=1 00:05:26.823 --rc genhtml_function_coverage=1 00:05:26.823 --rc genhtml_legend=1 00:05:26.823 --rc geninfo_all_blocks=1 00:05:26.823 --rc geninfo_unexecuted_blocks=1 00:05:26.823 00:05:26.823 ' 00:05:26.823 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.823 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57978 00:05:26.823 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57978 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57978 ']' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.823 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.823 15:53:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.081 [2024-11-20 15:53:25.108971] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:27.081 [2024-11-20 15:53:25.109551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57978 ] 00:05:27.081 [2024-11-20 15:53:25.260895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.081 [2024-11-20 15:53:25.325110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.339 [2024-11-20 15:53:25.398127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.596 15:53:25 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.596 15:53:25 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.596 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:27.854 15:53:25 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57978 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57978 ']' 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57978 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57978 00:05:27.854 killing process with pid 57978 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57978' 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@973 -- # kill 57978 00:05:27.854 15:53:25 alias_rpc -- common/autotest_common.sh@978 -- # wait 57978 00:05:28.421 ************************************ 00:05:28.421 END TEST alias_rpc 00:05:28.422 ************************************ 00:05:28.422 00:05:28.422 real 0m1.524s 00:05:28.422 user 0m1.611s 00:05:28.422 sys 0m0.448s 00:05:28.422 15:53:26 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.422 15:53:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.422 15:53:26 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:28.422 15:53:26 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:28.422 15:53:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.422 15:53:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.422 15:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:28.422 ************************************ 00:05:28.422 START TEST spdkcli_tcp 00:05:28.422 ************************************ 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:28.422 * Looking for test storage... 
00:05:28.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.422 15:53:26 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.422 --rc genhtml_branch_coverage=1 00:05:28.422 --rc genhtml_function_coverage=1 00:05:28.422 --rc genhtml_legend=1 00:05:28.422 --rc geninfo_all_blocks=1 00:05:28.422 --rc geninfo_unexecuted_blocks=1 00:05:28.422 00:05:28.422 ' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.422 --rc genhtml_branch_coverage=1 00:05:28.422 --rc genhtml_function_coverage=1 00:05:28.422 --rc genhtml_legend=1 00:05:28.422 --rc geninfo_all_blocks=1 00:05:28.422 --rc geninfo_unexecuted_blocks=1 00:05:28.422 
00:05:28.422 ' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.422 --rc genhtml_branch_coverage=1 00:05:28.422 --rc genhtml_function_coverage=1 00:05:28.422 --rc genhtml_legend=1 00:05:28.422 --rc geninfo_all_blocks=1 00:05:28.422 --rc geninfo_unexecuted_blocks=1 00:05:28.422 00:05:28.422 ' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.422 --rc genhtml_branch_coverage=1 00:05:28.422 --rc genhtml_function_coverage=1 00:05:28.422 --rc genhtml_legend=1 00:05:28.422 --rc geninfo_all_blocks=1 00:05:28.422 --rc geninfo_unexecuted_blocks=1 00:05:28.422 00:05:28.422 ' 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58049 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.422 15:53:26 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58049 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58049 ']' 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.422 15:53:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.681 [2024-11-20 15:53:26.687048] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
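What follows is the TCP half of the spdkcli_tcp test: the target only listens on the UNIX socket /var/tmp/spdk.sock, so a socat relay is placed in front of it and rpc.py is pointed at 127.0.0.1:9998 instead. A sketch of that bridge with the same port and socket path as below; it assumes socat is installed and a target is already running:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Relay TCP port 9998 to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998 UNIX-CONNECT:"$sock" &
socat_pid=$!

# Talk to the target over TCP: up to 100 connection retries, 2 s timeout.
"$rpc" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

# socat may already have exited once the connection closed.
kill "$socat_pid" 2>/dev/null || true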
00:05:28.681 [2024-11-20 15:53:26.687411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58049 ] 00:05:28.681 [2024-11-20 15:53:26.834117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.681 [2024-11-20 15:53:26.887201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.681 [2024-11-20 15:53:26.887210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.939 [2024-11-20 15:53:26.964620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:28.940 15:53:27 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.940 15:53:27 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:28.940 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58064 00:05:28.940 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:28.940 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.506 [ 00:05:29.506 "bdev_malloc_delete", 00:05:29.506 "bdev_malloc_create", 00:05:29.506 "bdev_null_resize", 00:05:29.506 "bdev_null_delete", 00:05:29.506 "bdev_null_create", 00:05:29.506 "bdev_nvme_cuse_unregister", 00:05:29.506 "bdev_nvme_cuse_register", 00:05:29.506 "bdev_opal_new_user", 00:05:29.506 "bdev_opal_set_lock_state", 00:05:29.506 "bdev_opal_delete", 00:05:29.506 "bdev_opal_get_info", 00:05:29.506 "bdev_opal_create", 00:05:29.507 "bdev_nvme_opal_revert", 00:05:29.507 "bdev_nvme_opal_init", 00:05:29.507 "bdev_nvme_send_cmd", 00:05:29.507 "bdev_nvme_set_keys", 00:05:29.507 "bdev_nvme_get_path_iostat", 00:05:29.507 "bdev_nvme_get_mdns_discovery_info", 00:05:29.507 "bdev_nvme_stop_mdns_discovery", 00:05:29.507 "bdev_nvme_start_mdns_discovery", 00:05:29.507 "bdev_nvme_set_multipath_policy", 00:05:29.507 "bdev_nvme_set_preferred_path", 00:05:29.507 "bdev_nvme_get_io_paths", 00:05:29.507 "bdev_nvme_remove_error_injection", 00:05:29.507 "bdev_nvme_add_error_injection", 00:05:29.507 "bdev_nvme_get_discovery_info", 00:05:29.507 "bdev_nvme_stop_discovery", 00:05:29.507 "bdev_nvme_start_discovery", 00:05:29.507 "bdev_nvme_get_controller_health_info", 00:05:29.507 "bdev_nvme_disable_controller", 00:05:29.507 "bdev_nvme_enable_controller", 00:05:29.507 "bdev_nvme_reset_controller", 00:05:29.507 "bdev_nvme_get_transport_statistics", 00:05:29.507 "bdev_nvme_apply_firmware", 00:05:29.507 "bdev_nvme_detach_controller", 00:05:29.507 "bdev_nvme_get_controllers", 00:05:29.507 "bdev_nvme_attach_controller", 00:05:29.507 "bdev_nvme_set_hotplug", 00:05:29.507 "bdev_nvme_set_options", 00:05:29.507 "bdev_passthru_delete", 00:05:29.507 "bdev_passthru_create", 00:05:29.507 "bdev_lvol_set_parent_bdev", 00:05:29.507 "bdev_lvol_set_parent", 00:05:29.507 "bdev_lvol_check_shallow_copy", 00:05:29.507 "bdev_lvol_start_shallow_copy", 00:05:29.507 "bdev_lvol_grow_lvstore", 00:05:29.507 "bdev_lvol_get_lvols", 00:05:29.507 "bdev_lvol_get_lvstores", 00:05:29.507 "bdev_lvol_delete", 00:05:29.507 "bdev_lvol_set_read_only", 00:05:29.507 "bdev_lvol_resize", 00:05:29.507 "bdev_lvol_decouple_parent", 00:05:29.507 "bdev_lvol_inflate", 00:05:29.507 "bdev_lvol_rename", 00:05:29.507 "bdev_lvol_clone_bdev", 00:05:29.507 "bdev_lvol_clone", 00:05:29.507 "bdev_lvol_snapshot", 
00:05:29.507 "bdev_lvol_create", 00:05:29.507 "bdev_lvol_delete_lvstore", 00:05:29.507 "bdev_lvol_rename_lvstore", 00:05:29.507 "bdev_lvol_create_lvstore", 00:05:29.507 "bdev_raid_set_options", 00:05:29.507 "bdev_raid_remove_base_bdev", 00:05:29.507 "bdev_raid_add_base_bdev", 00:05:29.507 "bdev_raid_delete", 00:05:29.507 "bdev_raid_create", 00:05:29.507 "bdev_raid_get_bdevs", 00:05:29.507 "bdev_error_inject_error", 00:05:29.507 "bdev_error_delete", 00:05:29.507 "bdev_error_create", 00:05:29.507 "bdev_split_delete", 00:05:29.507 "bdev_split_create", 00:05:29.507 "bdev_delay_delete", 00:05:29.507 "bdev_delay_create", 00:05:29.507 "bdev_delay_update_latency", 00:05:29.507 "bdev_zone_block_delete", 00:05:29.507 "bdev_zone_block_create", 00:05:29.507 "blobfs_create", 00:05:29.507 "blobfs_detect", 00:05:29.507 "blobfs_set_cache_size", 00:05:29.507 "bdev_aio_delete", 00:05:29.507 "bdev_aio_rescan", 00:05:29.507 "bdev_aio_create", 00:05:29.507 "bdev_ftl_set_property", 00:05:29.507 "bdev_ftl_get_properties", 00:05:29.507 "bdev_ftl_get_stats", 00:05:29.507 "bdev_ftl_unmap", 00:05:29.507 "bdev_ftl_unload", 00:05:29.507 "bdev_ftl_delete", 00:05:29.507 "bdev_ftl_load", 00:05:29.507 "bdev_ftl_create", 00:05:29.507 "bdev_virtio_attach_controller", 00:05:29.507 "bdev_virtio_scsi_get_devices", 00:05:29.507 "bdev_virtio_detach_controller", 00:05:29.507 "bdev_virtio_blk_set_hotplug", 00:05:29.507 "bdev_iscsi_delete", 00:05:29.507 "bdev_iscsi_create", 00:05:29.507 "bdev_iscsi_set_options", 00:05:29.507 "bdev_uring_delete", 00:05:29.507 "bdev_uring_rescan", 00:05:29.507 "bdev_uring_create", 00:05:29.507 "accel_error_inject_error", 00:05:29.507 "ioat_scan_accel_module", 00:05:29.507 "dsa_scan_accel_module", 00:05:29.507 "iaa_scan_accel_module", 00:05:29.507 "keyring_file_remove_key", 00:05:29.507 "keyring_file_add_key", 00:05:29.507 "keyring_linux_set_options", 00:05:29.507 "fsdev_aio_delete", 00:05:29.507 "fsdev_aio_create", 00:05:29.507 "iscsi_get_histogram", 00:05:29.507 "iscsi_enable_histogram", 00:05:29.507 "iscsi_set_options", 00:05:29.507 "iscsi_get_auth_groups", 00:05:29.507 "iscsi_auth_group_remove_secret", 00:05:29.507 "iscsi_auth_group_add_secret", 00:05:29.507 "iscsi_delete_auth_group", 00:05:29.507 "iscsi_create_auth_group", 00:05:29.507 "iscsi_set_discovery_auth", 00:05:29.507 "iscsi_get_options", 00:05:29.507 "iscsi_target_node_request_logout", 00:05:29.507 "iscsi_target_node_set_redirect", 00:05:29.507 "iscsi_target_node_set_auth", 00:05:29.507 "iscsi_target_node_add_lun", 00:05:29.507 "iscsi_get_stats", 00:05:29.507 "iscsi_get_connections", 00:05:29.507 "iscsi_portal_group_set_auth", 00:05:29.507 "iscsi_start_portal_group", 00:05:29.507 "iscsi_delete_portal_group", 00:05:29.507 "iscsi_create_portal_group", 00:05:29.507 "iscsi_get_portal_groups", 00:05:29.507 "iscsi_delete_target_node", 00:05:29.507 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.507 "iscsi_target_node_add_pg_ig_maps", 00:05:29.507 "iscsi_create_target_node", 00:05:29.507 "iscsi_get_target_nodes", 00:05:29.507 "iscsi_delete_initiator_group", 00:05:29.507 "iscsi_initiator_group_remove_initiators", 00:05:29.507 "iscsi_initiator_group_add_initiators", 00:05:29.507 "iscsi_create_initiator_group", 00:05:29.507 "iscsi_get_initiator_groups", 00:05:29.507 "nvmf_set_crdt", 00:05:29.507 "nvmf_set_config", 00:05:29.507 "nvmf_set_max_subsystems", 00:05:29.507 "nvmf_stop_mdns_prr", 00:05:29.507 "nvmf_publish_mdns_prr", 00:05:29.507 "nvmf_subsystem_get_listeners", 00:05:29.507 "nvmf_subsystem_get_qpairs", 00:05:29.507 
"nvmf_subsystem_get_controllers", 00:05:29.507 "nvmf_get_stats", 00:05:29.507 "nvmf_get_transports", 00:05:29.507 "nvmf_create_transport", 00:05:29.507 "nvmf_get_targets", 00:05:29.507 "nvmf_delete_target", 00:05:29.507 "nvmf_create_target", 00:05:29.507 "nvmf_subsystem_allow_any_host", 00:05:29.507 "nvmf_subsystem_set_keys", 00:05:29.507 "nvmf_subsystem_remove_host", 00:05:29.507 "nvmf_subsystem_add_host", 00:05:29.507 "nvmf_ns_remove_host", 00:05:29.507 "nvmf_ns_add_host", 00:05:29.507 "nvmf_subsystem_remove_ns", 00:05:29.507 "nvmf_subsystem_set_ns_ana_group", 00:05:29.507 "nvmf_subsystem_add_ns", 00:05:29.507 "nvmf_subsystem_listener_set_ana_state", 00:05:29.507 "nvmf_discovery_get_referrals", 00:05:29.507 "nvmf_discovery_remove_referral", 00:05:29.507 "nvmf_discovery_add_referral", 00:05:29.507 "nvmf_subsystem_remove_listener", 00:05:29.507 "nvmf_subsystem_add_listener", 00:05:29.507 "nvmf_delete_subsystem", 00:05:29.507 "nvmf_create_subsystem", 00:05:29.507 "nvmf_get_subsystems", 00:05:29.507 "env_dpdk_get_mem_stats", 00:05:29.507 "nbd_get_disks", 00:05:29.507 "nbd_stop_disk", 00:05:29.507 "nbd_start_disk", 00:05:29.507 "ublk_recover_disk", 00:05:29.507 "ublk_get_disks", 00:05:29.507 "ublk_stop_disk", 00:05:29.507 "ublk_start_disk", 00:05:29.507 "ublk_destroy_target", 00:05:29.507 "ublk_create_target", 00:05:29.507 "virtio_blk_create_transport", 00:05:29.507 "virtio_blk_get_transports", 00:05:29.507 "vhost_controller_set_coalescing", 00:05:29.507 "vhost_get_controllers", 00:05:29.507 "vhost_delete_controller", 00:05:29.507 "vhost_create_blk_controller", 00:05:29.507 "vhost_scsi_controller_remove_target", 00:05:29.507 "vhost_scsi_controller_add_target", 00:05:29.507 "vhost_start_scsi_controller", 00:05:29.507 "vhost_create_scsi_controller", 00:05:29.507 "thread_set_cpumask", 00:05:29.507 "scheduler_set_options", 00:05:29.507 "framework_get_governor", 00:05:29.507 "framework_get_scheduler", 00:05:29.507 "framework_set_scheduler", 00:05:29.507 "framework_get_reactors", 00:05:29.507 "thread_get_io_channels", 00:05:29.507 "thread_get_pollers", 00:05:29.507 "thread_get_stats", 00:05:29.507 "framework_monitor_context_switch", 00:05:29.507 "spdk_kill_instance", 00:05:29.507 "log_enable_timestamps", 00:05:29.507 "log_get_flags", 00:05:29.507 "log_clear_flag", 00:05:29.507 "log_set_flag", 00:05:29.507 "log_get_level", 00:05:29.507 "log_set_level", 00:05:29.507 "log_get_print_level", 00:05:29.507 "log_set_print_level", 00:05:29.507 "framework_enable_cpumask_locks", 00:05:29.507 "framework_disable_cpumask_locks", 00:05:29.507 "framework_wait_init", 00:05:29.507 "framework_start_init", 00:05:29.507 "scsi_get_devices", 00:05:29.507 "bdev_get_histogram", 00:05:29.507 "bdev_enable_histogram", 00:05:29.507 "bdev_set_qos_limit", 00:05:29.507 "bdev_set_qd_sampling_period", 00:05:29.507 "bdev_get_bdevs", 00:05:29.507 "bdev_reset_iostat", 00:05:29.507 "bdev_get_iostat", 00:05:29.507 "bdev_examine", 00:05:29.507 "bdev_wait_for_examine", 00:05:29.507 "bdev_set_options", 00:05:29.507 "accel_get_stats", 00:05:29.507 "accel_set_options", 00:05:29.507 "accel_set_driver", 00:05:29.507 "accel_crypto_key_destroy", 00:05:29.507 "accel_crypto_keys_get", 00:05:29.507 "accel_crypto_key_create", 00:05:29.507 "accel_assign_opc", 00:05:29.507 "accel_get_module_info", 00:05:29.508 "accel_get_opc_assignments", 00:05:29.508 "vmd_rescan", 00:05:29.508 "vmd_remove_device", 00:05:29.508 "vmd_enable", 00:05:29.508 "sock_get_default_impl", 00:05:29.508 "sock_set_default_impl", 00:05:29.508 "sock_impl_set_options", 00:05:29.508 
"sock_impl_get_options", 00:05:29.508 "iobuf_get_stats", 00:05:29.508 "iobuf_set_options", 00:05:29.508 "keyring_get_keys", 00:05:29.508 "framework_get_pci_devices", 00:05:29.508 "framework_get_config", 00:05:29.508 "framework_get_subsystems", 00:05:29.508 "fsdev_set_opts", 00:05:29.508 "fsdev_get_opts", 00:05:29.508 "trace_get_info", 00:05:29.508 "trace_get_tpoint_group_mask", 00:05:29.508 "trace_disable_tpoint_group", 00:05:29.508 "trace_enable_tpoint_group", 00:05:29.508 "trace_clear_tpoint_mask", 00:05:29.508 "trace_set_tpoint_mask", 00:05:29.508 "notify_get_notifications", 00:05:29.508 "notify_get_types", 00:05:29.508 "spdk_get_version", 00:05:29.508 "rpc_get_methods" 00:05:29.508 ] 00:05:29.508 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.508 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.508 15:53:27 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58049 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58049 ']' 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58049 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58049 00:05:29.508 killing process with pid 58049 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58049' 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58049 00:05:29.508 15:53:27 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58049 00:05:29.766 ************************************ 00:05:29.766 END TEST spdkcli_tcp 00:05:29.766 ************************************ 00:05:29.766 00:05:29.766 real 0m1.535s 00:05:29.766 user 0m2.600s 00:05:29.766 sys 0m0.512s 00:05:29.766 15:53:27 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.766 15:53:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.766 15:53:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.766 15:53:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.766 15:53:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.766 15:53:27 -- common/autotest_common.sh@10 -- # set +x 00:05:29.766 ************************************ 00:05:29.766 START TEST dpdk_mem_utility 00:05:29.766 ************************************ 00:05:29.766 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.024 * Looking for test storage... 
00:05:30.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:30.024 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.024 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.024 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.024 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.024 15:53:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.025 15:53:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.025 --rc genhtml_branch_coverage=1 00:05:30.025 --rc genhtml_function_coverage=1 00:05:30.025 --rc genhtml_legend=1 00:05:30.025 --rc geninfo_all_blocks=1 00:05:30.025 --rc geninfo_unexecuted_blocks=1 00:05:30.025 00:05:30.025 ' 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.025 --rc 
genhtml_branch_coverage=1 00:05:30.025 --rc genhtml_function_coverage=1 00:05:30.025 --rc genhtml_legend=1 00:05:30.025 --rc geninfo_all_blocks=1 00:05:30.025 --rc geninfo_unexecuted_blocks=1 00:05:30.025 00:05:30.025 ' 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.025 --rc genhtml_branch_coverage=1 00:05:30.025 --rc genhtml_function_coverage=1 00:05:30.025 --rc genhtml_legend=1 00:05:30.025 --rc geninfo_all_blocks=1 00:05:30.025 --rc geninfo_unexecuted_blocks=1 00:05:30.025 00:05:30.025 ' 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.025 --rc genhtml_branch_coverage=1 00:05:30.025 --rc genhtml_function_coverage=1 00:05:30.025 --rc genhtml_legend=1 00:05:30.025 --rc geninfo_all_blocks=1 00:05:30.025 --rc geninfo_unexecuted_blocks=1 00:05:30.025 00:05:30.025 ' 00:05:30.025 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.025 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58146 00:05:30.025 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.025 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58146 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58146 ']' 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.025 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.025 [2024-11-20 15:53:28.268454] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:05:30.025 [2024-11-20 15:53:28.268849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58146 ] 00:05:30.283 [2024-11-20 15:53:28.407338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.283 [2024-11-20 15:53:28.467859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.541 [2024-11-20 15:53:28.538863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:30.541 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.541 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:30.541 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.541 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.541 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.541 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.541 { 00:05:30.541 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.541 } 00:05:30.541 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.541 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:30.800 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:30.800 1 heaps totaling size 818.000000 MiB 00:05:30.800 size: 818.000000 MiB heap id: 0 00:05:30.800 end heaps---------- 00:05:30.800 9 mempools totaling size 603.782043 MiB 00:05:30.800 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.800 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.800 size: 100.555481 MiB name: bdev_io_58146 00:05:30.800 size: 50.003479 MiB name: msgpool_58146 00:05:30.800 size: 36.509338 MiB name: fsdev_io_58146 00:05:30.800 size: 21.763794 MiB name: PDU_Pool 00:05:30.800 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.800 size: 4.133484 MiB name: evtpool_58146 00:05:30.800 size: 0.026123 MiB name: Session_Pool 00:05:30.800 end mempools------- 00:05:30.800 6 memzones totaling size 4.142822 MiB 00:05:30.800 size: 1.000366 MiB name: RG_ring_0_58146 00:05:30.800 size: 1.000366 MiB name: RG_ring_1_58146 00:05:30.800 size: 1.000366 MiB name: RG_ring_4_58146 00:05:30.800 size: 1.000366 MiB name: RG_ring_5_58146 00:05:30.800 size: 0.125366 MiB name: RG_ring_2_58146 00:05:30.800 size: 0.015991 MiB name: RG_ring_3_58146 00:05:30.800 end memzones------- 00:05:30.800 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.801 heap id: 0 total size: 818.000000 MiB number of busy elements: 316 number of free elements: 15 00:05:30.801 list of free elements. 
size: 10.802673 MiB 00:05:30.801 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:30.801 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:30.801 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:30.801 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:30.801 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:30.801 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:30.801 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:30.801 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:30.801 element at address: 0x20001ae00000 with size: 0.567871 MiB 00:05:30.801 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:30.801 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:30.801 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:30.801 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:30.801 element at address: 0x200028200000 with size: 0.395752 MiB 00:05:30.801 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:30.801 list of standard malloc elements. size: 199.268433 MiB 00:05:30.801 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:30.801 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:30.801 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:30.801 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:30.801 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:30.801 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.801 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:30.801 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.801 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:30.801 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:05:30.801 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:30.801 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:30.801 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:30.801 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:30.802 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92d40 with size: 0.000183 MiB 
00:05:30.802 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:05:30.802 element at 
address: 0x20001ae952c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:30.802 element at address: 0x200028265500 with size: 0.000183 MiB 00:05:30.802 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c480 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c540 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c600 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c780 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c840 with size: 0.000183 MiB 00:05:30.802 element at address: 0x20002826c900 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d080 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d140 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d200 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d380 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d440 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d500 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d680 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d740 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d800 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826d980 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826da40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826db00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826de00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826df80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e040 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e100 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e280 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e340 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e400 
with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e580 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e640 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e700 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e880 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826e940 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f000 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f180 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f240 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f300 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f480 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f540 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f600 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f780 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f840 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f900 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:30.803 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:30.803 list of memzone associated elements. 
size: 607.928894 MiB 00:05:30.803 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:30.803 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.803 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:30.803 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.803 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:30.803 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58146_0 00:05:30.803 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:30.803 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58146_0 00:05:30.803 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:30.803 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58146_0 00:05:30.803 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:30.803 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.803 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:30.803 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.803 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:30.803 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58146_0 00:05:30.803 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:30.803 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58146 00:05:30.803 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.803 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58146 00:05:30.803 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:30.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.803 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:30.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.803 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:30.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.803 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:30.803 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.803 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:30.803 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58146 00:05:30.803 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:30.803 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58146 00:05:30.803 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:30.803 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58146 00:05:30.803 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:30.803 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58146 00:05:30.803 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:30.803 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58146 00:05:30.803 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:30.803 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58146 00:05:30.803 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:30.803 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.803 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:30.803 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.803 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:30.803 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.803 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:30.803 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58146 00:05:30.803 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:30.803 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58146 00:05:30.803 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:30.803 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.803 element at address: 0x200028265680 with size: 0.023743 MiB 00:05:30.803 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.803 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:30.804 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58146 00:05:30.804 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:05:30.804 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.804 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:30.804 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58146 00:05:30.804 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:30.804 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58146 00:05:30.804 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:30.804 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58146 00:05:30.804 element at address: 0x20002826c280 with size: 0.000305 MiB 00:05:30.804 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.804 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.804 15:53:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58146 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58146 ']' 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58146 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58146 00:05:30.804 killing process with pid 58146 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58146' 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58146 00:05:30.804 15:53:28 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58146 00:05:31.369 00:05:31.369 real 0m1.331s 00:05:31.369 user 0m1.267s 00:05:31.369 sys 0m0.420s 00:05:31.369 15:53:29 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.369 ************************************ 00:05:31.369 END TEST dpdk_mem_utility 00:05:31.369 ************************************ 00:05:31.369 15:53:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.369 15:53:29 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.369 15:53:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.369 15:53:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.369 15:53:29 -- common/autotest_common.sh@10 -- # set +x 
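The dpdk_mem_utility run above boils down to a short sequence that is fully visible in the trace: start spdk_tgt, wait for its RPC socket, ask it to dump DPDK memory statistics, and post-process the dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element listing of heap 0. A condensed, hedged sketch of that flow, with the harness helpers (waitforlisten, killprocess, rpc_cmd) replaced by plain stand-ins:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    MEM_SCRIPT=$SPDK/scripts/dpdk_mem_info.py

    $SPDK/build/bin/spdk_tgt &                    # target under test
    spdkpid=$!
    trap 'kill $spdkpid' SIGINT SIGTERM EXIT

    # crude stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers RPCs
    until $SPDK/scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done

    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
    $MEM_SCRIPT                                   # heap / mempool / memzone summary
    $MEM_SCRIPT -m 0                              # detailed element list for heap id 0

    kill $spdkpid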
00:05:31.369 ************************************ 00:05:31.369 START TEST event 00:05:31.369 ************************************ 00:05:31.369 15:53:29 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.369 * Looking for test storage... 00:05:31.369 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.369 15:53:29 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.369 15:53:29 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.369 15:53:29 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.369 15:53:29 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.369 15:53:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.369 15:53:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.369 15:53:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.369 15:53:29 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.369 15:53:29 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.369 15:53:29 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.369 15:53:29 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.369 15:53:29 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.369 15:53:29 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.369 15:53:29 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.370 15:53:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.370 15:53:29 event -- scripts/common.sh@344 -- # case "$op" in 00:05:31.370 15:53:29 event -- scripts/common.sh@345 -- # : 1 00:05:31.370 15:53:29 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.370 15:53:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.370 15:53:29 event -- scripts/common.sh@365 -- # decimal 1 00:05:31.370 15:53:29 event -- scripts/common.sh@353 -- # local d=1 00:05:31.370 15:53:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.370 15:53:29 event -- scripts/common.sh@355 -- # echo 1 00:05:31.370 15:53:29 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.370 15:53:29 event -- scripts/common.sh@366 -- # decimal 2 00:05:31.370 15:53:29 event -- scripts/common.sh@353 -- # local d=2 00:05:31.370 15:53:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.370 15:53:29 event -- scripts/common.sh@355 -- # echo 2 00:05:31.370 15:53:29 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.370 15:53:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.370 15:53:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.370 15:53:29 event -- scripts/common.sh@368 -- # return 0 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.370 --rc genhtml_branch_coverage=1 00:05:31.370 --rc genhtml_function_coverage=1 00:05:31.370 --rc genhtml_legend=1 00:05:31.370 --rc geninfo_all_blocks=1 00:05:31.370 --rc geninfo_unexecuted_blocks=1 00:05:31.370 00:05:31.370 ' 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.370 --rc genhtml_branch_coverage=1 00:05:31.370 --rc genhtml_function_coverage=1 00:05:31.370 --rc genhtml_legend=1 00:05:31.370 --rc 
geninfo_all_blocks=1 00:05:31.370 --rc geninfo_unexecuted_blocks=1 00:05:31.370 00:05:31.370 ' 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.370 --rc genhtml_branch_coverage=1 00:05:31.370 --rc genhtml_function_coverage=1 00:05:31.370 --rc genhtml_legend=1 00:05:31.370 --rc geninfo_all_blocks=1 00:05:31.370 --rc geninfo_unexecuted_blocks=1 00:05:31.370 00:05:31.370 ' 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.370 --rc genhtml_branch_coverage=1 00:05:31.370 --rc genhtml_function_coverage=1 00:05:31.370 --rc genhtml_legend=1 00:05:31.370 --rc geninfo_all_blocks=1 00:05:31.370 --rc geninfo_unexecuted_blocks=1 00:05:31.370 00:05:31.370 ' 00:05:31.370 15:53:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.370 15:53:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.370 15:53:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:31.370 15:53:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.370 15:53:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.370 ************************************ 00:05:31.370 START TEST event_perf 00:05:31.370 ************************************ 00:05:31.370 15:53:29 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.370 Running I/O for 1 seconds...[2024-11-20 15:53:29.611740] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:31.370 [2024-11-20 15:53:29.612340] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58218 ] 00:05:31.661 [2024-11-20 15:53:29.760402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.661 [2024-11-20 15:53:29.826987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.661 [2024-11-20 15:53:29.827163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.661 [2024-11-20 15:53:29.827305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.661 [2024-11-20 15:53:29.827310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.046 Running I/O for 1 seconds... 00:05:33.046 lcore 0: 194093 00:05:33.046 lcore 1: 194092 00:05:33.046 lcore 2: 194093 00:05:33.046 lcore 3: 194094 00:05:33.046 done. 
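The four counters above come from a 1-second run (-t 1) across core mask 0xF: each of the four reactors dispatched roughly 194 thousand events before the timer expired, and the near-identical per-lcore counts indicate the load was spread evenly. Invoked outside the run_test wrapper, the same measurement reduces to the command already shown in the trace:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1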
00:05:33.046 ************************************ 00:05:33.046 END TEST event_perf 00:05:33.046 ************************************ 00:05:33.046 00:05:33.046 real 0m1.303s 00:05:33.046 user 0m4.130s 00:05:33.046 sys 0m0.050s 00:05:33.046 15:53:30 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.046 15:53:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.046 15:53:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.046 15:53:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:33.046 15:53:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.046 15:53:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.046 ************************************ 00:05:33.046 START TEST event_reactor 00:05:33.046 ************************************ 00:05:33.046 15:53:30 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.046 [2024-11-20 15:53:30.965545] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:33.046 [2024-11-20 15:53:30.965679] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58256 ] 00:05:33.046 [2024-11-20 15:53:31.112023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.046 [2024-11-20 15:53:31.176609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.419 test_start 00:05:34.419 oneshot 00:05:34.419 tick 100 00:05:34.419 tick 100 00:05:34.419 tick 250 00:05:34.419 tick 100 00:05:34.419 tick 100 00:05:34.419 tick 250 00:05:34.419 tick 100 00:05:34.419 tick 500 00:05:34.419 tick 100 00:05:34.419 tick 100 00:05:34.419 tick 250 00:05:34.419 tick 100 00:05:34.419 tick 100 00:05:34.419 test_end 00:05:34.419 ************************************ 00:05:34.419 END TEST event_reactor 00:05:34.419 ************************************ 00:05:34.419 00:05:34.419 real 0m1.298s 00:05:34.419 user 0m1.145s 00:05:34.419 sys 0m0.048s 00:05:34.419 15:53:32 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.419 15:53:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.419 15:53:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.419 15:53:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:34.419 15:53:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.419 15:53:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.419 ************************************ 00:05:34.419 START TEST event_reactor_perf 00:05:34.419 ************************************ 00:05:34.419 15:53:32 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.419 [2024-11-20 15:53:32.311373] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
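Every sub-test in this section is driven through the same run_test wrapper, which is where the asterisk banners and the real/user/sys timing lines above come from: it validates its arguments, prints the START TEST banner, times the command, and prints the END TEST banner. A simplified sketch of that wrapper follows; the real helper in autotest_common.sh also manages xtrace and failure bookkeeping, which is omitted here.

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1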
00:05:34.419 [2024-11-20 15:53:32.311479] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58292 ] 00:05:34.419 [2024-11-20 15:53:32.459687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.419 [2024-11-20 15:53:32.520465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.353 test_start 00:05:35.353 test_end 00:05:35.353 Performance: 395633 events per second 00:05:35.353 00:05:35.353 real 0m1.286s 00:05:35.353 user 0m1.123s 00:05:35.353 sys 0m0.057s 00:05:35.353 15:53:33 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.353 15:53:33 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.353 ************************************ 00:05:35.353 END TEST event_reactor_perf 00:05:35.353 ************************************ 00:05:35.612 15:53:33 event -- event/event.sh@49 -- # uname -s 00:05:35.612 15:53:33 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:35.612 15:53:33 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.612 15:53:33 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.612 15:53:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.612 15:53:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.612 ************************************ 00:05:35.612 START TEST event_scheduler 00:05:35.612 ************************************ 00:05:35.612 15:53:33 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:35.612 * Looking for test storage... 
00:05:35.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:35.612 15:53:33 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.612 15:53:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.612 15:53:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.870 15:53:33 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.870 --rc genhtml_branch_coverage=1 00:05:35.870 --rc genhtml_function_coverage=1 00:05:35.870 --rc genhtml_legend=1 00:05:35.870 --rc geninfo_all_blocks=1 00:05:35.870 --rc geninfo_unexecuted_blocks=1 00:05:35.870 00:05:35.870 ' 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.870 --rc genhtml_branch_coverage=1 00:05:35.870 --rc genhtml_function_coverage=1 00:05:35.870 --rc genhtml_legend=1 00:05:35.870 --rc geninfo_all_blocks=1 00:05:35.870 --rc geninfo_unexecuted_blocks=1 00:05:35.870 00:05:35.870 ' 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.870 --rc genhtml_branch_coverage=1 00:05:35.870 --rc genhtml_function_coverage=1 00:05:35.870 --rc genhtml_legend=1 00:05:35.870 --rc geninfo_all_blocks=1 00:05:35.870 --rc geninfo_unexecuted_blocks=1 00:05:35.870 00:05:35.870 ' 00:05:35.870 15:53:33 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.871 --rc genhtml_branch_coverage=1 00:05:35.871 --rc genhtml_function_coverage=1 00:05:35.871 --rc genhtml_legend=1 00:05:35.871 --rc geninfo_all_blocks=1 00:05:35.871 --rc geninfo_unexecuted_blocks=1 00:05:35.871 00:05:35.871 ' 00:05:35.871 15:53:33 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.871 15:53:33 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58356 00:05:35.871 15:53:33 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:35.871 15:53:33 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.871 15:53:33 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58356 00:05:35.871 15:53:33 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58356 ']' 00:05:35.871 15:53:33 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.871 15:53:33 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.871 15:53:33 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.871 15:53:33 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.871 15:53:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.871 [2024-11-20 15:53:33.937624] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:35.871 [2024-11-20 15:53:33.938566] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58356 ] 00:05:35.871 [2024-11-20 15:53:34.092728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.130 [2024-11-20 15:53:34.168103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.130 [2024-11-20 15:53:34.168214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.130 [2024-11-20 15:53:34.168317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.130 [2024-11-20 15:53:34.168322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:37.066 15:53:34 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.066 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.066 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.066 POWER: Cannot set governor of lcore 0 to performance 00:05:37.066 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.066 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.066 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.066 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.066 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:37.066 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:37.066 POWER: Unable to set Power Management Environment for lcore 0 00:05:37.066 [2024-11-20 15:53:34.954783] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:37.066 [2024-11-20 15:53:34.954799] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:37.066 [2024-11-20 15:53:34.954808] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.066 [2024-11-20 15:53:34.954820] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.066 [2024-11-20 15:53:34.954980] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.066 [2024-11-20 15:53:34.954998] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:34 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 [2024-11-20 15:53:35.017802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.066 [2024-11-20 15:53:35.057505] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:37.066 15:53:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.066 15:53:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.066 15:53:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 ************************************ 00:05:37.066 START TEST scheduler_create_thread 00:05:37.066 ************************************ 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 2 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 3 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 4 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 5 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 6 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 7 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 8 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 9 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 10 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.066 15:53:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.443 15:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.443 15:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:38.443 15:53:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:38.443 15:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.443 15:53:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.819 ************************************ 00:05:39.819 END TEST scheduler_create_thread 00:05:39.819 ************************************ 00:05:39.819 15:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.819 00:05:39.819 real 0m2.615s 00:05:39.819 user 0m0.014s 00:05:39.819 sys 0m0.008s 00:05:39.819 15:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.819 15:53:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.819 15:53:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.819 15:53:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58356 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58356 ']' 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58356 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58356 00:05:39.819 killing process with pid 58356 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
58356' 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58356 00:05:39.819 15:53:37 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58356 00:05:40.077 [2024-11-20 15:53:38.165953] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:40.336 ************************************ 00:05:40.336 END TEST event_scheduler 00:05:40.336 ************************************ 00:05:40.336 00:05:40.336 real 0m4.748s 00:05:40.336 user 0m8.948s 00:05:40.336 sys 0m0.414s 00:05:40.336 15:53:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.336 15:53:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.336 15:53:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.336 15:53:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.336 15:53:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.336 15:53:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.336 15:53:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.336 ************************************ 00:05:40.336 START TEST app_repeat 00:05:40.336 ************************************ 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.336 Process app_repeat pid: 58461 00:05:40.336 spdk_app_start Round 0 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58461 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58461' 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.336 15:53:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
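For reference, the event_scheduler run that has just finished above reduces to a short RPC sequence against the scheduler test app. A minimal standalone sketch of that sequence follows, assuming the test app is already listening on /var/tmp/spdk.sock and that the scheduler_plugin module the test loads is importable by rpc.py; the thread names, core masks and -a (active percentage) values are the ones visible in the trace, everything else is illustrative. In the trace the same calls go through the test suite's rpc_cmd wrapper rather than being typed out like this.

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  # switch to the dynamic scheduler, then let the framework finish initializing
  $rpc framework_set_scheduler dynamic
  $rpc framework_start_init
  # one fully busy (-a 100) and one idle (-a 0) thread pinned to each of cores 0-3
  for core in 0 1 2 3; do
      mask=$(printf '0x%x' $((1 << core)))
      $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
      $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
  done
  # unpinned threads: create one, retune another at runtime, delete a third
  $rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"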
00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.336 15:53:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.336 [2024-11-20 15:53:38.479018] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:05:40.336 [2024-11-20 15:53:38.479109] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58461 ] 00:05:40.595 [2024-11-20 15:53:38.626727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.595 [2024-11-20 15:53:38.689032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.595 [2024-11-20 15:53:38.689042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.595 [2024-11-20 15:53:38.747646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.595 15:53:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.595 15:53:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.595 15:53:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.858 Malloc0 00:05:40.858 15:53:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:41.423 Malloc1 00:05:41.423 15:53:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.423 15:53:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:41.682 /dev/nbd0 00:05:41.682 15:53:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:41.682 15:53:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.682 1+0 records in 00:05:41.682 1+0 records out 00:05:41.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328265 s, 12.5 MB/s 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.682 15:53:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.682 15:53:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.682 15:53:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.682 15:53:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.941 /dev/nbd1 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.941 1+0 records in 00:05:41.941 1+0 records out 00:05:41.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325332 s, 12.6 MB/s 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:41.941 15:53:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.941 15:53:40 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.941 15:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.255 15:53:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:42.255 { 00:05:42.255 "nbd_device": "/dev/nbd0", 00:05:42.255 "bdev_name": "Malloc0" 00:05:42.255 }, 00:05:42.255 { 00:05:42.255 "nbd_device": "/dev/nbd1", 00:05:42.255 "bdev_name": "Malloc1" 00:05:42.255 } 00:05:42.255 ]' 00:05:42.255 15:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:42.255 { 00:05:42.255 "nbd_device": "/dev/nbd0", 00:05:42.255 "bdev_name": "Malloc0" 00:05:42.255 }, 00:05:42.255 { 00:05:42.255 "nbd_device": "/dev/nbd1", 00:05:42.256 "bdev_name": "Malloc1" 00:05:42.256 } 00:05:42.256 ]' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:42.256 /dev/nbd1' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:42.256 /dev/nbd1' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:42.256 256+0 records in 00:05:42.256 256+0 records out 00:05:42.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109167 s, 96.1 MB/s 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:42.256 256+0 records in 00:05:42.256 256+0 records out 00:05:42.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243325 s, 43.1 MB/s 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:42.256 15:53:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:42.514 256+0 records in 00:05:42.514 
256+0 records out 00:05:42.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262271 s, 40.0 MB/s 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.514 15:53:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:42.772 15:53:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.029 15:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.287 15:53:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.287 15:53:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:43.545 15:53:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:43.803 [2024-11-20 15:53:41.904916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.803 [2024-11-20 15:53:41.964476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.804 [2024-11-20 15:53:41.964488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.804 [2024-11-20 15:53:42.020137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:43.804 [2024-11-20 15:53:42.020287] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.804 [2024-11-20 15:53:42.020302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.188 spdk_app_start Round 1 00:05:47.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.188 15:53:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.188 15:53:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.188 15:53:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
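Each app_repeat round in this trace performs the same malloc-bdev/nbd round-trip check. Condensed into a standalone sketch, assuming the app under test is already serving RPCs on /var/tmp/spdk-nbd.sock, the nbd kernel module is loaded, and the scratch file is written to the current directory rather than the test/event path used above:

  rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # two 64 MB malloc bdevs with 4096-byte blocks, exported as /dev/nbd0 and /dev/nbd1
  $rpc bdev_malloc_create 64 4096          # -> Malloc0
  $rpc bdev_malloc_create 64 4096          # -> Malloc1
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  $rpc nbd_start_disk Malloc1 /dev/nbd1
  # push 1 MiB of random data through each device, then compare it back
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest "$nbd"
  done
  rm nbdrandtest
  # tear down and confirm no nbd devices are left behind
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1
  test "$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)" = 0

The waitfornbd / waitfornbd_exit readiness checks that surround these steps in the trace are sketched separately further down.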
00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.188 15:53:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.188 15:53:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.188 15:53:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.188 15:53:45 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.188 Malloc0 00:05:47.188 15:53:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.447 Malloc1 00:05:47.447 15:53:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.447 15:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.705 /dev/nbd0 00:05:47.705 15:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.705 15:53:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.705 1+0 records in 00:05:47.705 1+0 records out 
00:05:47.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275028 s, 14.9 MB/s 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.705 15:53:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.705 15:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.705 15:53:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.705 15:53:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.963 /dev/nbd1 00:05:47.963 15:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.221 1+0 records in 00:05:48.221 1+0 records out 00:05:48.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234437 s, 17.5 MB/s 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.221 15:53:46 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.221 15:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.480 { 00:05:48.480 "nbd_device": "/dev/nbd0", 00:05:48.480 "bdev_name": "Malloc0" 00:05:48.480 }, 00:05:48.480 { 00:05:48.480 "nbd_device": "/dev/nbd1", 00:05:48.480 "bdev_name": "Malloc1" 00:05:48.480 } 
00:05:48.480 ]' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.480 { 00:05:48.480 "nbd_device": "/dev/nbd0", 00:05:48.480 "bdev_name": "Malloc0" 00:05:48.480 }, 00:05:48.480 { 00:05:48.480 "nbd_device": "/dev/nbd1", 00:05:48.480 "bdev_name": "Malloc1" 00:05:48.480 } 00:05:48.480 ]' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.480 /dev/nbd1' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.480 /dev/nbd1' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.480 256+0 records in 00:05:48.480 256+0 records out 00:05:48.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600006 s, 175 MB/s 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.480 256+0 records in 00:05:48.480 256+0 records out 00:05:48.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028099 s, 37.3 MB/s 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.480 256+0 records in 00:05:48.480 256+0 records out 00:05:48.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264102 s, 39.7 MB/s 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.480 15:53:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.739 15:53:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.305 15:53:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.306 15:53:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.306 15:53:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.564 15:53:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.564 15:53:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.823 15:53:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.081 [2024-11-20 15:53:48.203201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.081 [2024-11-20 15:53:48.256880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.081 [2024-11-20 15:53:48.256887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.081 [2024-11-20 15:53:48.316712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.081 [2024-11-20 15:53:48.316817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.081 [2024-11-20 15:53:48.316867] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.364 spdk_app_start Round 2 00:05:53.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.364 15:53:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.364 15:53:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:53.364 15:53:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
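The waitfornbd and waitfornbd_exit helpers that account for most of the per-device lines in this trace reduce to a bounded poll of /proc/partitions plus a one-block direct read. A simplified reconstruction follows; the retry bound of 20 and the individual commands are taken from the trace, while the sleep between attempts is an assumption, since the successful first probe above never needs a retry:

  # wait until the kernel lists the device, then prove it answers a direct read
  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed pacing between retries
      done
      for ((i = 1; i <= 20; i++)); do
          dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s nbdtest)
          rm -f nbdtest
          [ "$size" != 0 ] && return 0
          sleep 0.1   # assumed pacing between retries
      done
      return 1
  }

  # teardown counterpart: poll until the device disappears from /proc/partitions
  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1   # assumed pacing between retries
      done
      return 0
  }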
00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.364 15:53:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.364 15:53:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.623 Malloc0 00:05:53.623 15:53:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.004 Malloc1 00:05:54.004 15:53:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.004 15:53:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.262 /dev/nbd0 00:05:54.262 15:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.262 15:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.262 1+0 records in 00:05:54.262 1+0 records out 
00:05:54.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543003 s, 7.5 MB/s 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.262 15:53:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.262 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.262 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.263 15:53:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.521 /dev/nbd1 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.521 1+0 records in 00:05:54.521 1+0 records out 00:05:54.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244767 s, 16.7 MB/s 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.521 15:53:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.521 15:53:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.780 15:53:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.780 { 00:05:54.780 "nbd_device": "/dev/nbd0", 00:05:54.780 "bdev_name": "Malloc0" 00:05:54.780 }, 00:05:54.780 { 00:05:54.780 "nbd_device": "/dev/nbd1", 00:05:54.780 "bdev_name": "Malloc1" 00:05:54.780 } 
00:05:54.780 ]' 00:05:54.780 15:53:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.780 { 00:05:54.780 "nbd_device": "/dev/nbd0", 00:05:54.780 "bdev_name": "Malloc0" 00:05:54.780 }, 00:05:54.780 { 00:05:54.780 "nbd_device": "/dev/nbd1", 00:05:54.780 "bdev_name": "Malloc1" 00:05:54.780 } 00:05:54.780 ]' 00:05:54.780 15:53:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.780 /dev/nbd1' 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.780 /dev/nbd1' 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.780 15:53:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.040 256+0 records in 00:05:55.040 256+0 records out 00:05:55.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106144 s, 98.8 MB/s 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.040 256+0 records in 00:05:55.040 256+0 records out 00:05:55.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251112 s, 41.8 MB/s 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.040 256+0 records in 00:05:55.040 256+0 records out 00:05:55.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235463 s, 44.5 MB/s 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.040 15:53:53 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.040 15:53:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.299 15:53:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.557 15:53:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.815 15:53:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.815 15:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.815 15:53:54 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:56.075 15:53:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:56.075 15:53:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.334 15:53:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.593 [2024-11-20 15:53:54.614931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.593 [2024-11-20 15:53:54.663635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.593 [2024-11-20 15:53:54.663647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.593 [2024-11-20 15:53:54.720659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.593 [2024-11-20 15:53:54.720763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.593 [2024-11-20 15:53:54.720778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.877 15:53:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58461 /var/tmp/spdk-nbd.sock 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58461 ']' 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
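For readers skimming the app_repeat trace above: the nbd_dd_data_verify steps reduce to writing one 1 MiB random pattern and comparing each exported NBD device against it. A minimal standalone sketch of that pattern, assuming /dev/nbd0 and /dev/nbd1 are already exported (block size, count and the cmp limit are the values visible in the trace; the temp-file location is arbitrary):

    # Write 1 MiB of random data to a temp file, copy it to each NBD device,
    # then verify the devices byte-for-byte against the same file.
    tmp_file=$(mktemp)
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # 1 MiB of test data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct # bypass the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                            # non-zero exit on any mismatch
    done
    rm "$tmp_file"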
00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.877 15:53:57 event.app_repeat -- event/event.sh@39 -- # killprocess 58461 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58461 ']' 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58461 00:05:59.877 15:53:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58461 00:05:59.878 killing process with pid 58461 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58461' 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58461 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58461 00:05:59.878 spdk_app_start is called in Round 0. 00:05:59.878 Shutdown signal received, stop current app iteration 00:05:59.878 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:05:59.878 spdk_app_start is called in Round 1. 00:05:59.878 Shutdown signal received, stop current app iteration 00:05:59.878 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:05:59.878 spdk_app_start is called in Round 2. 00:05:59.878 Shutdown signal received, stop current app iteration 00:05:59.878 Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 reinitialization... 00:05:59.878 spdk_app_start is called in Round 3. 00:05:59.878 Shutdown signal received, stop current app iteration 00:05:59.878 15:53:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:59.878 15:53:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:59.878 00:05:59.878 real 0m19.514s 00:05:59.878 user 0m44.681s 00:05:59.878 sys 0m3.004s 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.878 15:53:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.878 ************************************ 00:05:59.878 END TEST app_repeat 00:05:59.878 ************************************ 00:05:59.878 15:53:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:59.878 15:53:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.878 15:53:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.878 15:53:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.878 15:53:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.878 ************************************ 00:05:59.878 START TEST cpu_locks 00:05:59.878 ************************************ 00:05:59.878 15:53:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:59.878 * Looking for test storage... 
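The killprocess calls traced above (pid 58461 here, and again in every cpu_locks case below) follow the same guard-then-kill shape. A rough equivalent, assuming only the pid is known; the real helper in autotest_common.sh also special-cases targets launched under sudo, which these traces only touch via the reactor_0 = sudo test:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # is the process still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for the targets in this log
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # wait only succeeds for children of this shell
    }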
00:05:59.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:59.878 15:53:58 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.878 15:53:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.878 15:53:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.137 15:53:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.137 15:53:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.137 15:53:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.137 15:53:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.137 --rc genhtml_branch_coverage=1 00:06:00.137 --rc genhtml_function_coverage=1 00:06:00.137 --rc genhtml_legend=1 00:06:00.137 --rc geninfo_all_blocks=1 00:06:00.137 --rc geninfo_unexecuted_blocks=1 00:06:00.137 00:06:00.137 ' 00:06:00.137 15:53:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.137 --rc genhtml_branch_coverage=1 00:06:00.137 --rc genhtml_function_coverage=1 
00:06:00.137 --rc genhtml_legend=1 00:06:00.137 --rc geninfo_all_blocks=1 00:06:00.137 --rc geninfo_unexecuted_blocks=1 00:06:00.137 00:06:00.137 ' 00:06:00.138 15:53:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.138 --rc genhtml_branch_coverage=1 00:06:00.138 --rc genhtml_function_coverage=1 00:06:00.138 --rc genhtml_legend=1 00:06:00.138 --rc geninfo_all_blocks=1 00:06:00.138 --rc geninfo_unexecuted_blocks=1 00:06:00.138 00:06:00.138 ' 00:06:00.138 15:53:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.138 --rc genhtml_branch_coverage=1 00:06:00.138 --rc genhtml_function_coverage=1 00:06:00.138 --rc genhtml_legend=1 00:06:00.138 --rc geninfo_all_blocks=1 00:06:00.138 --rc geninfo_unexecuted_blocks=1 00:06:00.138 00:06:00.138 ' 00:06:00.138 15:53:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.138 15:53:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.138 15:53:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.138 15:53:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.138 15:53:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.138 15:53:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.138 15:53:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.138 ************************************ 00:06:00.138 START TEST default_locks 00:06:00.138 ************************************ 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58905 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58905 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58905 ']' 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.138 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.138 [2024-11-20 15:53:58.251375] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
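The lcov gate traced a few entries back (lt 1.15 2 going through cmp_versions) is an element-wise numeric comparison of dotted version strings. A simplified sketch of the same idea, not the full scripts/common.sh implementation (missing components are treated as 0 here):

    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo 'lcov is older than 2.x'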
00:06:00.138 [2024-11-20 15:53:58.251473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58905 ] 00:06:00.396 [2024-11-20 15:53:58.395797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.396 [2024-11-20 15:53:58.458893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.396 [2024-11-20 15:53:58.535936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.655 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.655 15:53:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:00.655 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58905 00:06:00.655 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58905 00:06:00.655 15:53:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58905 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58905 ']' 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58905 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58905 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.220 killing process with pid 58905 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58905' 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58905 00:06:01.220 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58905 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58905 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58905 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58905 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58905 ']' 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.479 
15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58905) - No such process 00:06:01.479 ERROR: process (pid: 58905) is no longer running 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.479 00:06:01.479 real 0m1.428s 00:06:01.479 user 0m1.381s 00:06:01.479 sys 0m0.542s 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.479 ************************************ 00:06:01.479 15:53:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 END TEST default_locks 00:06:01.479 ************************************ 00:06:01.479 15:53:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.479 15:53:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.479 15:53:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.479 15:53:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 START TEST default_locks_via_rpc 00:06:01.479 ************************************ 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58950 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58950 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58950 ']' 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:01.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.479 15:53:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.737 [2024-11-20 15:53:59.746857] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:01.737 [2024-11-20 15:53:59.746985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58950 ] 00:06:01.737 [2024-11-20 15:53:59.896010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.737 [2024-11-20 15:53:59.961560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.994 [2024-11-20 15:54:00.036339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.994 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.994 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.994 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:01.994 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.994 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58950 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58950 00:06:02.252 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58950 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58950 ']' 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58950 00:06:02.509 15:54:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58950 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.509 killing process with pid 58950 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58950' 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58950 00:06:02.509 15:54:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58950 00:06:03.075 00:06:03.075 real 0m1.482s 00:06:03.075 user 0m1.457s 00:06:03.075 sys 0m0.583s 00:06:03.075 15:54:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.075 15:54:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.075 ************************************ 00:06:03.075 END TEST default_locks_via_rpc 00:06:03.075 ************************************ 00:06:03.075 15:54:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.075 15:54:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.075 15:54:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.075 15:54:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.075 ************************************ 00:06:03.075 START TEST non_locking_app_on_locked_coremask 00:06:03.075 ************************************ 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58993 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58993 /var/tmp/spdk.sock 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58993 ']' 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
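The locks_exist checks traced above for pids 58905 and 58950 boil down to asking lslocks whether the target still holds its per-core lock file, while no_locks asserts the opposite after shutdown. A condensed sketch of both (the /var/tmp/spdk_cpu_lock_ naming is the one shown by the check_remaining_locks trace further down):

    # Does the target with this pid hold any SPDK per-core lock?
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Are all lock files gone (e.g. after killprocess)?
    no_locks() {
        local lock_files=(/var/tmp/spdk_cpu_lock_*)
        [ ! -e "${lock_files[0]}" ]    # unexpanded glob means nothing matched
    }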
00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.075 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.075 [2024-11-20 15:54:01.266615] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:03.075 [2024-11-20 15:54:01.266719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58993 ] 00:06:03.333 [2024-11-20 15:54:01.407626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.333 [2024-11-20 15:54:01.471904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.333 [2024-11-20 15:54:01.545470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59002 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59002 /var/tmp/spdk2.sock 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59002 ']' 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.592 15:54:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.592 [2024-11-20 15:54:01.837330] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:03.592 [2024-11-20 15:54:01.837425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59002 ] 00:06:03.851 [2024-11-20 15:54:01.999053] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
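The default_locks_via_rpc trace above toggles core locking on a live target over the RPC socket rather than at start-up. Stripped of the test plumbing, the exchange looks roughly like this; tgt_pid stands for the pid of the already-running spdk_tgt and is an assumption of the sketch, while the two framework_* RPCs are the ones named in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop the per-core lock files while the target keeps running...
    "$rpc" framework_disable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo 'unexpected: locks still held' >&2

    # ...then take them back and confirm they reappear.
    "$rpc" framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock || echo 'unexpected: locks missing' >&2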
00:06:03.851 [2024-11-20 15:54:01.999112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.110 [2024-11-20 15:54:02.131655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.110 [2024-11-20 15:54:02.286781] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.046 15:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.046 15:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.046 15:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58993 00:06:05.046 15:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58993 00:06:05.046 15:54:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58993 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58993 ']' 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58993 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58993 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.663 killing process with pid 58993 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58993' 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58993 00:06:05.663 15:54:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58993 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59002 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59002 ']' 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59002 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59002 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.622 killing process with pid 59002 00:06:06.622 15:54:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59002' 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59002 00:06:06.622 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59002 00:06:06.883 00:06:06.883 real 0m3.730s 00:06:06.883 user 0m4.113s 00:06:06.883 sys 0m1.111s 00:06:06.883 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.883 15:54:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 ************************************ 00:06:06.883 END TEST non_locking_app_on_locked_coremask 00:06:06.883 ************************************ 00:06:06.883 15:54:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.883 15:54:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.883 15:54:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.883 15:54:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 ************************************ 00:06:06.883 START TEST locking_app_on_unlocked_coremask 00:06:06.883 ************************************ 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:06.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59069 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59069 /var/tmp/spdk.sock 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.883 15:54:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.883 [2024-11-20 15:54:05.057703] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:06.883 [2024-11-20 15:54:05.057835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:06:07.141 [2024-11-20 15:54:05.205347] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
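The non_locking_app_on_locked_coremask case that just finished pairs a locking target with a lock-free one on the same core. In outline, using the binary, masks and socket paths from the trace (the sleeps are placeholders for the real waitforlisten helper):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims core 0 and takes its spdk_cpu_lock file.
    "$spdk_tgt" -m 0x1 &
    pid1=$!
    sleep 1    # placeholder for waitforlisten /var/tmp/spdk.sock

    # Second instance shares core 0 but skips the lock, on its own RPC socket.
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    sleep 1    # placeholder for waitforlisten /var/tmp/spdk2.sock

    lslocks -p "$pid1" | grep -q spdk_cpu_lock    # only the first instance holds the lock
    kill "$pid1" "$pid2"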
00:06:07.141 [2024-11-20 15:54:05.205415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.141 [2024-11-20 15:54:05.267354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.141 [2024-11-20 15:54:05.345188] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59077 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59077 /var/tmp/spdk2.sock 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59077 ']' 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.401 15:54:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.401 [2024-11-20 15:54:05.634447] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:07.401 [2024-11-20 15:54:05.634557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:06:07.660 [2024-11-20 15:54:05.799414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.918 [2024-11-20 15:54:05.925454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.918 [2024-11-20 15:54:06.094740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.485 15:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.485 15:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.485 15:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59077 00:06:08.485 15:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59077 00:06:08.485 15:54:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59069 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59069 ']' 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59069 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59069 00:06:09.419 killing process with pid 59069 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59069' 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59069 00:06:09.419 15:54:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59069 00:06:10.350 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59077 00:06:10.350 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59077 ']' 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59077 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59077 00:06:10.351 killing process with pid 59077 00:06:10.351 15:54:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59077' 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59077 00:06:10.351 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59077 00:06:10.627 00:06:10.627 real 0m3.697s 00:06:10.627 user 0m3.989s 00:06:10.627 sys 0m1.165s 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.627 ************************************ 00:06:10.627 END TEST locking_app_on_unlocked_coremask 00:06:10.627 ************************************ 00:06:10.627 15:54:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:10.627 15:54:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.627 15:54:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.627 15:54:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.627 ************************************ 00:06:10.627 START TEST locking_app_on_locked_coremask 00:06:10.627 ************************************ 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59144 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59144 /var/tmp/spdk.sock 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59144 ']' 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.627 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.628 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.628 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.628 15:54:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.628 [2024-11-20 15:54:08.816205] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:10.628 [2024-11-20 15:54:08.816356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59144 ] 00:06:10.894 [2024-11-20 15:54:08.967999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.894 [2024-11-20 15:54:09.033790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.894 [2024-11-20 15:54:09.112054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59160 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59160 /var/tmp/spdk2.sock 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59160 /var/tmp/spdk2.sock 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59160 /var/tmp/spdk2.sock 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59160 ']' 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.830 15:54:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.830 [2024-11-20 15:54:09.861705] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:11.830 [2024-11-20 15:54:09.861801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ] 00:06:11.830 [2024-11-20 15:54:10.024258] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59144 has claimed it. 00:06:11.830 [2024-11-20 15:54:10.024554] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:12.397 ERROR: process (pid: 59160) is no longer running 00:06:12.397 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59160) - No such process 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59144 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59144 00:06:12.397 15:54:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59144 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59144 ']' 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59144 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59144 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.962 killing process with pid 59144 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59144' 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59144 00:06:12.962 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59144 00:06:13.527 00:06:13.527 real 0m2.740s 00:06:13.527 user 0m3.177s 00:06:13.527 sys 0m0.717s 00:06:13.527 15:54:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.527 15:54:11 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:13.527 ************************************ 00:06:13.527 END TEST locking_app_on_locked_coremask 00:06:13.527 ************************************ 00:06:13.527 15:54:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:13.527 15:54:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.527 15:54:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.527 15:54:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:13.527 ************************************ 00:06:13.527 START TEST locking_overlapped_coremask 00:06:13.527 ************************************ 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59206 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59206 /var/tmp/spdk.sock 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59206 ']' 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.527 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.528 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.528 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.528 15:54:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.528 [2024-11-20 15:54:11.592309] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
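The locking_app_on_locked_coremask trace above is the negative case: a second target asked to start on a core that is already claimed must fail, which the log records as the claim_cpu_cores error naming pid 59144 followed by "No such process" for the would-be pid 59160. A bare-bones restatement of that expectation (the non-zero exit of the second start is inferred from the trace, not guaranteed by this sketch):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &    # holder of the core-0 lock
    holder=$!
    sleep 1                 # placeholder for waitforlisten

    # A second instance on the same core should die with
    # "Cannot create lock on core 0, probably process <holder> has claimed it".
    if "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo 'unexpected: second instance started' >&2
    fi
    kill "$holder"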
00:06:13.528 [2024-11-20 15:54:11.592429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:06:13.528 [2024-11-20 15:54:11.736793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.786 [2024-11-20 15:54:11.802973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.786 [2024-11-20 15:54:11.803088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.786 [2024-11-20 15:54:11.803095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.786 [2024-11-20 15:54:11.880417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59228 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59228 /var/tmp/spdk2.sock 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59228 /var/tmp/spdk2.sock 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59228 /var/tmp/spdk2.sock 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59228 ']' 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.354 15:54:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.611 [2024-11-20 15:54:12.649333] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:14.611 [2024-11-20 15:54:12.649418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59228 ] 00:06:14.611 [2024-11-20 15:54:12.811534] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59206 has claimed it. 00:06:14.611 [2024-11-20 15:54:12.811595] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:15.176 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59228) - No such process 00:06:15.176 ERROR: process (pid: 59228) is no longer running 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59206 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59206 ']' 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59206 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59206 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.176 killing process with pid 59206 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59206' 00:06:15.176 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59206 00:06:15.176 15:54:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59206 00:06:15.744 00:06:15.744 real 0m2.288s 00:06:15.744 user 0m6.478s 00:06:15.744 sys 0m0.434s 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.744 ************************************ 00:06:15.744 END TEST locking_overlapped_coremask 00:06:15.744 ************************************ 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.744 15:54:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.744 15:54:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.744 15:54:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.744 15:54:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.744 ************************************ 00:06:15.744 START TEST locking_overlapped_coremask_via_rpc 00:06:15.744 ************************************ 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59269 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59269 /var/tmp/spdk.sock 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59269 ']' 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.744 15:54:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.744 [2024-11-20 15:54:13.924111] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:15.744 [2024-11-20 15:54:13.924222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:06:16.002 [2024-11-20 15:54:14.073047] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.002 [2024-11-20 15:54:14.073109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.002 [2024-11-20 15:54:14.132444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.002 [2024-11-20 15:54:14.132541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.002 [2024-11-20 15:54:14.132549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.002 [2024-11-20 15:54:14.204449] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59287 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.937 15:54:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.937 [2024-11-20 15:54:14.959197] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:16.937 [2024-11-20 15:54:14.959311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:06:16.937 [2024-11-20 15:54:15.124681] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
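The second target (59287, mask 0x1c) is coming up alongside the first (59269, mask 0x7) even though the two masks overlap on core 2, because --disable-cpumask-locks skips claiming the per-core lock files at startup, hence the "CPU core locks deactivated" notices. The test then turns locking on over JSON-RPC; a sketch of those two calls as they might be issued by hand (assumes the spdk repo root as working directory and uses scripts/rpc.py directly instead of the test's rpc_cmd wrapper), with the second expected to fail:

  # enable core locks on the first target (default /var/tmp/spdk.sock): should succeed
  scripts/rpc.py framework_enable_cpumask_locks
  # enable them on the second target: should fail, since core 2 is already locked by 59269
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks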
00:06:16.937 [2024-11-20 15:54:15.124731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.195 [2024-11-20 15:54:15.252206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.195 [2024-11-20 15:54:15.255907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.195 [2024-11-20 15:54:15.255907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:17.195 [2024-11-20 15:54:15.391907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.763 [2024-11-20 15:54:15.971926] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59269 has claimed it. 
00:06:17.763 request: 00:06:17.763 { 00:06:17.763 "method": "framework_enable_cpumask_locks", 00:06:17.763 "req_id": 1 00:06:17.763 } 00:06:17.763 Got JSON-RPC error response 00:06:17.763 response: 00:06:17.763 { 00:06:17.763 "code": -32603, 00:06:17.763 "message": "Failed to claim CPU core: 2" 00:06:17.763 } 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59269 /var/tmp/spdk.sock 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59269 ']' 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.763 15:54:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
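The request/response pair above is the JSON-RPC view of the same core conflict: enabling locks on the second target fails with -32603 and "Failed to claim CPU core: 2". After that, the test confirms which lock files actually exist; the check_remaining_locks helper traced further below boils down to a glob comparison, roughly (a cleaned-up sketch, not a replacement for the helper):

  # only the first target's three cores (mask 0x7) should have lock files
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo 'locks match cores 0-2 only'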
00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.332 ************************************ 00:06:18.332 END TEST locking_overlapped_coremask_via_rpc 00:06:18.332 ************************************ 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.332 00:06:18.332 real 0m2.679s 00:06:18.332 user 0m1.424s 00:06:18.332 sys 0m0.186s 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.332 15:54:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.332 15:54:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:18.332 15:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59269 ]] 00:06:18.332 15:54:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59269 00:06:18.332 15:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59269 ']' 00:06:18.332 15:54:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59269 00:06:18.332 15:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.332 15:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.332 15:54:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59269 00:06:18.590 killing process with pid 59269 00:06:18.590 15:54:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.590 15:54:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.590 15:54:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59269' 00:06:18.590 15:54:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59269 00:06:18.590 15:54:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59269 00:06:18.848 15:54:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59287 ]] 00:06:18.848 15:54:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59287 00:06:18.848 15:54:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:06:18.848 15:54:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59287 00:06:18.848 15:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:18.848 15:54:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.848 
15:54:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59287 00:06:18.848 killing process with pid 59287 00:06:18.848 15:54:17 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:18.848 15:54:17 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:18.848 15:54:17 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59287' 00:06:18.848 15:54:17 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59287 00:06:18.848 15:54:17 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59287 00:06:19.414 15:54:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59269 ]] 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59269 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59269 ']' 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59269 00:06:19.415 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59269) - No such process 00:06:19.415 Process with pid 59269 is not found 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59269 is not found' 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59287 ]] 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59287 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59287 00:06:19.415 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59287) - No such process 00:06:19.415 Process with pid 59287 is not found 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59287 is not found' 00:06:19.415 15:54:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:19.415 00:06:19.415 real 0m19.401s 00:06:19.415 user 0m34.933s 00:06:19.415 sys 0m5.611s 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.415 ************************************ 00:06:19.415 END TEST cpu_locks 00:06:19.415 ************************************ 00:06:19.415 15:54:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.415 00:06:19.415 real 0m48.059s 00:06:19.415 user 1m35.181s 00:06:19.415 sys 0m9.458s 00:06:19.415 15:54:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.415 15:54:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.415 ************************************ 00:06:19.415 END TEST event 00:06:19.415 ************************************ 00:06:19.415 15:54:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.415 15:54:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.415 15:54:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.415 15:54:17 -- common/autotest_common.sh@10 -- # set +x 00:06:19.415 ************************************ 00:06:19.415 START TEST thread 00:06:19.415 ************************************ 00:06:19.415 15:54:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:19.415 * Looking for test storage... 
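The cpu_locks cleanup above probes each pid with kill -0 and tolerates targets that have already exited, which is why the bash "No such process" messages are followed by "Process with pid ... is not found" rather than a test failure. Reduced to a sketch (pid value taken from the trace above):

  # killprocess-style probe: a missing process is reported, not treated as an error
  pid=59269
  if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"
  fi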
00:06:19.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:19.415 15:54:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.415 15:54:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.415 15:54:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.415 15:54:17 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.415 15:54:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.415 15:54:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.415 15:54:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.415 15:54:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.415 15:54:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.415 15:54:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.415 15:54:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.415 15:54:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.415 15:54:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.415 15:54:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.415 15:54:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.415 15:54:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:19.415 15:54:17 thread -- scripts/common.sh@345 -- # : 1 00:06:19.415 15:54:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.415 15:54:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.415 15:54:17 thread -- scripts/common.sh@365 -- # decimal 1 00:06:19.673 15:54:17 thread -- scripts/common.sh@353 -- # local d=1 00:06:19.673 15:54:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.673 15:54:17 thread -- scripts/common.sh@355 -- # echo 1 00:06:19.673 15:54:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.673 15:54:17 thread -- scripts/common.sh@366 -- # decimal 2 00:06:19.673 15:54:17 thread -- scripts/common.sh@353 -- # local d=2 00:06:19.673 15:54:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.674 15:54:17 thread -- scripts/common.sh@355 -- # echo 2 00:06:19.674 15:54:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.674 15:54:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.674 15:54:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.674 15:54:17 thread -- scripts/common.sh@368 -- # return 0 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.674 --rc genhtml_branch_coverage=1 00:06:19.674 --rc genhtml_function_coverage=1 00:06:19.674 --rc genhtml_legend=1 00:06:19.674 --rc geninfo_all_blocks=1 00:06:19.674 --rc geninfo_unexecuted_blocks=1 00:06:19.674 00:06:19.674 ' 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.674 --rc genhtml_branch_coverage=1 00:06:19.674 --rc genhtml_function_coverage=1 00:06:19.674 --rc genhtml_legend=1 00:06:19.674 --rc geninfo_all_blocks=1 00:06:19.674 --rc geninfo_unexecuted_blocks=1 00:06:19.674 00:06:19.674 ' 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:19.674 --rc genhtml_branch_coverage=1 00:06:19.674 --rc genhtml_function_coverage=1 00:06:19.674 --rc genhtml_legend=1 00:06:19.674 --rc geninfo_all_blocks=1 00:06:19.674 --rc geninfo_unexecuted_blocks=1 00:06:19.674 00:06:19.674 ' 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.674 --rc genhtml_branch_coverage=1 00:06:19.674 --rc genhtml_function_coverage=1 00:06:19.674 --rc genhtml_legend=1 00:06:19.674 --rc geninfo_all_blocks=1 00:06:19.674 --rc geninfo_unexecuted_blocks=1 00:06:19.674 00:06:19.674 ' 00:06:19.674 15:54:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.674 15:54:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.674 ************************************ 00:06:19.674 START TEST thread_poller_perf 00:06:19.674 ************************************ 00:06:19.674 15:54:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:19.674 [2024-11-20 15:54:17.701992] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:19.674 [2024-11-20 15:54:17.702715] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59418 ] 00:06:19.674 [2024-11-20 15:54:17.846449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.674 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:19.674 [2024-11-20 15:54:17.904935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.121 [2024-11-20T15:54:19.371Z] ====================================== 00:06:21.121 [2024-11-20T15:54:19.371Z] busy:2206624423 (cyc) 00:06:21.121 [2024-11-20T15:54:19.371Z] total_run_count: 314000 00:06:21.121 [2024-11-20T15:54:19.371Z] tsc_hz: 2200000000 (cyc) 00:06:21.121 [2024-11-20T15:54:19.371Z] ====================================== 00:06:21.121 [2024-11-20T15:54:19.371Z] poller_cost: 7027 (cyc), 3194 (nsec) 00:06:21.121 00:06:21.121 real 0m1.293s 00:06:21.121 user 0m1.140s 00:06:21.121 sys 0m0.046s 00:06:21.121 15:54:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.122 ************************************ 00:06:21.122 END TEST thread_poller_perf 00:06:21.122 ************************************ 00:06:21.122 15:54:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 15:54:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.122 15:54:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:21.122 15:54:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.122 15:54:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.122 ************************************ 00:06:21.122 START TEST thread_poller_perf 00:06:21.122 ************************************ 00:06:21.122 15:54:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:21.122 [2024-11-20 15:54:19.043920] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:21.122 [2024-11-20 15:54:19.044018] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59453 ] 00:06:21.122 [2024-11-20 15:54:19.191831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.122 Running 1000 pollers for 1 seconds with 0 microseconds period. 
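The poller_cost figures in the result block above are derived from the other counters: busy cycles divided by the number of poller invocations, converted to nanoseconds using the reported TSC frequency. Reproducing the first run's numbers in plain bash arithmetic:

  busy=2206624423; runs=314000; tsc_hz=2200000000
  echo $(( busy / runs ))                        # 7027 cycles per poller call
  echo $(( busy * 1000000000 / tsc_hz / runs ))  # 3194 nsec per poller call

The zero-period run reported below follows the same formula (2202077704 / 4161000 gives 529 cycles, about 240 ns), which lines up with run-always pollers being far cheaper per invocation than the 1 µs timed pollers of the first run.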
00:06:21.122 [2024-11-20 15:54:19.247098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.054 [2024-11-20T15:54:20.304Z] ====================================== 00:06:22.054 [2024-11-20T15:54:20.304Z] busy:2202077704 (cyc) 00:06:22.054 [2024-11-20T15:54:20.304Z] total_run_count: 4161000 00:06:22.054 [2024-11-20T15:54:20.304Z] tsc_hz: 2200000000 (cyc) 00:06:22.054 [2024-11-20T15:54:20.304Z] ====================================== 00:06:22.054 [2024-11-20T15:54:20.304Z] poller_cost: 529 (cyc), 240 (nsec) 00:06:22.054 00:06:22.054 real 0m1.273s 00:06:22.054 user 0m1.120s 00:06:22.054 sys 0m0.047s 00:06:22.054 15:54:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.054 ************************************ 00:06:22.054 END TEST thread_poller_perf 00:06:22.054 ************************************ 00:06:22.054 15:54:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:22.312 15:54:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:22.312 00:06:22.312 real 0m2.841s 00:06:22.312 user 0m2.407s 00:06:22.312 sys 0m0.221s 00:06:22.312 15:54:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.312 15:54:20 thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.312 ************************************ 00:06:22.312 END TEST thread 00:06:22.312 ************************************ 00:06:22.312 15:54:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:22.312 15:54:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.312 15:54:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.312 15:54:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.312 15:54:20 -- common/autotest_common.sh@10 -- # set +x 00:06:22.312 ************************************ 00:06:22.312 START TEST app_cmdline 00:06:22.312 ************************************ 00:06:22.312 15:54:20 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:22.312 * Looking for test storage... 
00:06:22.312 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.312 15:54:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:22.312 15:54:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:22.312 15:54:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:22.312 15:54:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.312 15:54:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.568 15:54:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:22.568 15:54:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.568 15:54:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.568 --rc genhtml_branch_coverage=1 00:06:22.568 --rc genhtml_function_coverage=1 00:06:22.568 --rc genhtml_legend=1 00:06:22.568 --rc geninfo_all_blocks=1 00:06:22.568 --rc geninfo_unexecuted_blocks=1 00:06:22.568 00:06:22.568 ' 00:06:22.568 15:54:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:22.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.568 --rc genhtml_branch_coverage=1 00:06:22.568 --rc genhtml_function_coverage=1 00:06:22.569 --rc genhtml_legend=1 00:06:22.569 --rc geninfo_all_blocks=1 00:06:22.569 --rc geninfo_unexecuted_blocks=1 00:06:22.569 
00:06:22.569 ' 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:22.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.569 --rc genhtml_branch_coverage=1 00:06:22.569 --rc genhtml_function_coverage=1 00:06:22.569 --rc genhtml_legend=1 00:06:22.569 --rc geninfo_all_blocks=1 00:06:22.569 --rc geninfo_unexecuted_blocks=1 00:06:22.569 00:06:22.569 ' 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:22.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.569 --rc genhtml_branch_coverage=1 00:06:22.569 --rc genhtml_function_coverage=1 00:06:22.569 --rc genhtml_legend=1 00:06:22.569 --rc geninfo_all_blocks=1 00:06:22.569 --rc geninfo_unexecuted_blocks=1 00:06:22.569 00:06:22.569 ' 00:06:22.569 15:54:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:22.569 15:54:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59536 00:06:22.569 15:54:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59536 00:06:22.569 15:54:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59536 ']' 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.569 15:54:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.569 [2024-11-20 15:54:20.636513] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
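This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should be reachable; anything else is expected to come back as "Method not found" (-32601), which is what the env_dpdk_get_mem_stats probe further below produces. A sketch of the two cases as manual calls (assumes the spdk repo root as working directory; the test uses its rpc_cmd wrapper instead):

  # allowed by the allowlist: returns the version JSON shown below
  scripts/rpc.py spdk_get_version
  # not in the allowlist: expected to fail with -32601 "Method not found"
  scripts/rpc.py env_dpdk_get_mem_stats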
00:06:22.569 [2024-11-20 15:54:20.636620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:06:22.569 [2024-11-20 15:54:20.784486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.825 [2024-11-20 15:54:20.840123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.825 [2024-11-20 15:54:20.913595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:23.084 15:54:21 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.084 15:54:21 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:23.084 15:54:21 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:23.341 { 00:06:23.341 "version": "SPDK v25.01-pre git sha1 0728de5b0", 00:06:23.341 "fields": { 00:06:23.341 "major": 25, 00:06:23.341 "minor": 1, 00:06:23.341 "patch": 0, 00:06:23.341 "suffix": "-pre", 00:06:23.341 "commit": "0728de5b0" 00:06:23.341 } 00:06:23.341 } 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:23.341 15:54:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:23.341 15:54:21 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:23.599 request: 00:06:23.599 { 00:06:23.599 "method": "env_dpdk_get_mem_stats", 00:06:23.599 "req_id": 1 00:06:23.599 } 00:06:23.599 Got JSON-RPC error response 00:06:23.599 response: 00:06:23.599 { 00:06:23.599 "code": -32601, 00:06:23.599 "message": "Method not found" 00:06:23.599 } 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:23.599 15:54:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59536 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59536 ']' 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59536 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59536 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.599 killing process with pid 59536 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59536' 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 59536 00:06:23.599 15:54:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 59536 00:06:24.166 00:06:24.166 real 0m1.843s 00:06:24.166 user 0m2.258s 00:06:24.166 sys 0m0.474s 00:06:24.166 15:54:22 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.166 15:54:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 ************************************ 00:06:24.166 END TEST app_cmdline 00:06:24.166 ************************************ 00:06:24.166 15:54:22 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.166 15:54:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.166 15:54:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.166 15:54:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.166 ************************************ 00:06:24.166 START TEST version 00:06:24.166 ************************************ 00:06:24.166 15:54:22 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.166 * Looking for test storage... 
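The version test that starts here rebuilds the SPDK version string from include/spdk/version.h and compares it with python's spdk.__version__. A condensed sketch of what the get_header_version calls traced below amount to; the -pre to rc0 substitution is assumed from this run's outcome and covers only the pre-release case:

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix/-pre/rc0}"   # 25.1rc0 here, matching spdk.__version__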
00:06:24.166 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.166 15:54:22 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.166 15:54:22 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.166 15:54:22 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.424 15:54:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.424 15:54:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.424 15:54:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.424 15:54:22 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.424 15:54:22 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.424 15:54:22 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.424 15:54:22 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.424 15:54:22 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.424 15:54:22 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.424 15:54:22 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.424 15:54:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.424 15:54:22 version -- scripts/common.sh@344 -- # case "$op" in 00:06:24.424 15:54:22 version -- scripts/common.sh@345 -- # : 1 00:06:24.424 15:54:22 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.424 15:54:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.424 15:54:22 version -- scripts/common.sh@365 -- # decimal 1 00:06:24.424 15:54:22 version -- scripts/common.sh@353 -- # local d=1 00:06:24.424 15:54:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.424 15:54:22 version -- scripts/common.sh@355 -- # echo 1 00:06:24.424 15:54:22 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.424 15:54:22 version -- scripts/common.sh@366 -- # decimal 2 00:06:24.424 15:54:22 version -- scripts/common.sh@353 -- # local d=2 00:06:24.424 15:54:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.424 15:54:22 version -- scripts/common.sh@355 -- # echo 2 00:06:24.424 15:54:22 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.424 15:54:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.424 15:54:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.424 15:54:22 version -- scripts/common.sh@368 -- # return 0 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.424 --rc genhtml_branch_coverage=1 00:06:24.424 --rc genhtml_function_coverage=1 00:06:24.424 --rc genhtml_legend=1 00:06:24.424 --rc geninfo_all_blocks=1 00:06:24.424 --rc geninfo_unexecuted_blocks=1 00:06:24.424 00:06:24.424 ' 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.424 --rc genhtml_branch_coverage=1 00:06:24.424 --rc genhtml_function_coverage=1 00:06:24.424 --rc genhtml_legend=1 00:06:24.424 --rc geninfo_all_blocks=1 00:06:24.424 --rc geninfo_unexecuted_blocks=1 00:06:24.424 00:06:24.424 ' 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.424 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:24.424 --rc genhtml_branch_coverage=1 00:06:24.424 --rc genhtml_function_coverage=1 00:06:24.424 --rc genhtml_legend=1 00:06:24.424 --rc geninfo_all_blocks=1 00:06:24.424 --rc geninfo_unexecuted_blocks=1 00:06:24.424 00:06:24.424 ' 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.424 --rc genhtml_branch_coverage=1 00:06:24.424 --rc genhtml_function_coverage=1 00:06:24.424 --rc genhtml_legend=1 00:06:24.424 --rc geninfo_all_blocks=1 00:06:24.424 --rc geninfo_unexecuted_blocks=1 00:06:24.424 00:06:24.424 ' 00:06:24.424 15:54:22 version -- app/version.sh@17 -- # get_header_version major 00:06:24.424 15:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # cut -f2 00:06:24.424 15:54:22 version -- app/version.sh@17 -- # major=25 00:06:24.424 15:54:22 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.424 15:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # cut -f2 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.424 15:54:22 version -- app/version.sh@18 -- # minor=1 00:06:24.424 15:54:22 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.424 15:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # cut -f2 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.424 15:54:22 version -- app/version.sh@19 -- # patch=0 00:06:24.424 15:54:22 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.424 15:54:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # cut -f2 00:06:24.424 15:54:22 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.424 15:54:22 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.424 15:54:22 version -- app/version.sh@22 -- # version=25.1 00:06:24.424 15:54:22 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.424 15:54:22 version -- app/version.sh@28 -- # version=25.1rc0 00:06:24.424 15:54:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:24.424 15:54:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.424 15:54:22 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:24.424 15:54:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:24.424 00:06:24.424 real 0m0.253s 00:06:24.424 user 0m0.160s 00:06:24.424 sys 0m0.126s 00:06:24.424 15:54:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.424 ************************************ 00:06:24.424 15:54:22 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.424 END TEST version 00:06:24.424 ************************************ 00:06:24.424 15:54:22 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:24.424 15:54:22 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:24.424 15:54:22 -- spdk/autotest.sh@194 -- # uname -s 00:06:24.424 15:54:22 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:24.424 15:54:22 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:24.424 15:54:22 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:06:24.424 15:54:22 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:06:24.424 15:54:22 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:24.424 15:54:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.424 15:54:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.424 15:54:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.424 ************************************ 00:06:24.424 START TEST spdk_dd 00:06:24.424 ************************************ 00:06:24.424 15:54:22 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:24.424 * Looking for test storage... 00:06:24.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:24.424 15:54:22 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.424 15:54:22 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.425 15:54:22 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.682 15:54:22 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@345 -- # : 1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.682 15:54:22 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@368 -- # return 0 00:06:24.683 15:54:22 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.683 15:54:22 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.683 --rc genhtml_branch_coverage=1 00:06:24.683 --rc genhtml_function_coverage=1 00:06:24.683 --rc genhtml_legend=1 00:06:24.683 --rc geninfo_all_blocks=1 00:06:24.683 --rc geninfo_unexecuted_blocks=1 00:06:24.683 00:06:24.683 ' 00:06:24.683 15:54:22 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.683 --rc genhtml_branch_coverage=1 00:06:24.683 --rc genhtml_function_coverage=1 00:06:24.683 --rc genhtml_legend=1 00:06:24.683 --rc geninfo_all_blocks=1 00:06:24.683 --rc geninfo_unexecuted_blocks=1 00:06:24.683 00:06:24.683 ' 00:06:24.683 15:54:22 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.683 --rc genhtml_branch_coverage=1 00:06:24.683 --rc genhtml_function_coverage=1 00:06:24.683 --rc genhtml_legend=1 00:06:24.683 --rc geninfo_all_blocks=1 00:06:24.683 --rc geninfo_unexecuted_blocks=1 00:06:24.683 00:06:24.683 ' 00:06:24.683 15:54:22 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.683 --rc genhtml_branch_coverage=1 00:06:24.683 --rc genhtml_function_coverage=1 00:06:24.683 --rc genhtml_legend=1 00:06:24.683 --rc geninfo_all_blocks=1 00:06:24.683 --rc geninfo_unexecuted_blocks=1 00:06:24.683 00:06:24.683 ' 00:06:24.683 15:54:22 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.683 15:54:22 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.683 15:54:22 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.683 15:54:22 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.683 15:54:22 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.683 15:54:22 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:24.683 15:54:22 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.683 15:54:22 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:24.940 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.940 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:24.940 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:24.940 15:54:23 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:24.940 15:54:23 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@233 -- # local class 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@235 -- # local progif 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@236 -- # class=01 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:06:24.940 15:54:23 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@18 -- # local i 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@27 -- # return 0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:06:24.940 15:54:23 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:24.940 15:54:23 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:24.940 15:54:23 spdk_dd -- dd/common.sh@139 -- # local lib 00:06:24.940 15:54:23 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:24.940 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:24.940 15:54:23 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:24.940 15:54:23 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
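The trace just above is scripts/common.sh enumerating NVMe controllers by PCI class code (class 01, subclass 08, progif 02) before dd.sh picks its target devices. As a rough standalone illustration of that same pipeline (an approximation distilled from the trace, not the exact iter_pci_class_code helper):

# Sketch only: list NVMe controller BDFs the same way the trace above does.
# "-p02" is the NVMe programming interface, "0108" the class/subclass pair.
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# On this VM it would print the two controllers seen below: 0000:00:10.0 and 0000:00:11.0
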
00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
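The read loop running through these entries is check_liburing in dd/common.sh: it walks the NEEDED entries reported by objdump for the spdk_dd binary and compares each against liburing.so.*. A condensed standalone equivalent (the grep pattern is an assumption for illustration; the real script does the comparison inside the loop shown here):

# Sketch only: does the spdk_dd binary declare a NEEDED dependency on liburing?
if objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED | grep -q 'liburing\.so'; then
  echo 'spdk_dd linked to liburing'
fi
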
00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.202 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:25.203 * spdk_dd linked to liburing 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@146 -- # [[ -e 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:06:25.203 15:54:23 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:06:25.203 15:54:23 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:06:25.203 15:54:23 spdk_dd -- dd/common.sh@153 -- # return 0 00:06:25.203 15:54:23 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:25.203 15:54:23 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:25.203 15:54:23 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:25.203 15:54:23 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.203 15:54:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:25.203 ************************************ 00:06:25.203 START TEST spdk_dd_basic_rw 00:06:25.203 ************************************ 00:06:25.203 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:25.203 * Looking for test storage... 00:06:25.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:25.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.204 --rc genhtml_branch_coverage=1 00:06:25.204 --rc genhtml_function_coverage=1 00:06:25.204 --rc genhtml_legend=1 00:06:25.204 --rc geninfo_all_blocks=1 00:06:25.204 --rc geninfo_unexecuted_blocks=1 00:06:25.204 00:06:25.204 ' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:25.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.204 --rc genhtml_branch_coverage=1 00:06:25.204 --rc genhtml_function_coverage=1 00:06:25.204 --rc genhtml_legend=1 00:06:25.204 --rc geninfo_all_blocks=1 00:06:25.204 --rc geninfo_unexecuted_blocks=1 00:06:25.204 00:06:25.204 ' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:25.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.204 --rc genhtml_branch_coverage=1 00:06:25.204 --rc genhtml_function_coverage=1 00:06:25.204 --rc genhtml_legend=1 00:06:25.204 --rc geninfo_all_blocks=1 00:06:25.204 --rc geninfo_unexecuted_blocks=1 00:06:25.204 00:06:25.204 ' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:25.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.204 --rc genhtml_branch_coverage=1 00:06:25.204 --rc genhtml_function_coverage=1 00:06:25.204 --rc genhtml_legend=1 00:06:25.204 --rc geninfo_all_blocks=1 00:06:25.204 --rc geninfo_unexecuted_blocks=1 00:06:25.204 00:06:25.204 ' 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
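What follows is get_native_nvme_bs in dd/common.sh probing the namespace's native block size: it captures spdk_nvme_identify output for 0000:00:10.0, extracts the current LBA format index, then reads that format's data size. A simplified sketch of the same regex matching (variable names here are illustrative, the patterns mirror the ones visible in the trace below):

# Sketch only: derive the native block size from spdk_nvme_identify output.
pci=0000:00:10.0
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
re1='Current LBA Format: *LBA Format #([0-9]+)'
if [[ $id =~ $re1 ]]; then
  lbaf=${BASH_REMATCH[1]}                                  # "04" for this namespace
  re2="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ $id =~ $re2 ]] && echo "native block size: ${BASH_REMATCH[1]}"   # 4096 here
fi
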
00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:25.204 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:25.471 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:25.471 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported 
SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format 
#02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.472 ************************************ 00:06:25.472 START TEST dd_bs_lt_native_bs 00:06:25.472 ************************************ 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:25.472 15:54:23 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:25.472 { 00:06:25.472 "subsystems": [ 00:06:25.472 { 00:06:25.472 "subsystem": "bdev", 00:06:25.472 "config": [ 00:06:25.472 { 00:06:25.472 "params": { 00:06:25.472 "trtype": "pcie", 00:06:25.472 "traddr": "0000:00:10.0", 00:06:25.472 "name": "Nvme0" 00:06:25.472 }, 00:06:25.472 "method": "bdev_nvme_attach_controller" 00:06:25.472 }, 00:06:25.472 { 00:06:25.472 "method": "bdev_wait_for_examine" 00:06:25.472 } 00:06:25.472 ] 00:06:25.472 } 00:06:25.472 ] 00:06:25.472 } 00:06:25.472 [2024-11-20 15:54:23.708974] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:25.472 [2024-11-20 15:54:23.709086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:06:25.731 [2024-11-20 15:54:23.861715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.731 [2024-11-20 15:54:23.921684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.989 [2024-11-20 15:54:23.985395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.989 [2024-11-20 15:54:24.098442] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:25.989 [2024-11-20 15:54:24.098513] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:25.989 [2024-11-20 15:54:24.232075] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.247 00:06:26.247 real 0m0.647s 00:06:26.247 user 0m0.432s 00:06:26.247 sys 0m0.171s 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.247 15:54:24 
spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 ************************************ 00:06:26.247 END TEST dd_bs_lt_native_bs 00:06:26.247 ************************************ 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.247 ************************************ 00:06:26.247 START TEST dd_rw 00:06:26.247 ************************************ 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:26.247 15:54:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.813 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:26.813 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:26.813 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.813 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.070 [2024-11-20 15:54:25.066156] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:27.070 [2024-11-20 15:54:25.066316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59912 ] 00:06:27.070 { 00:06:27.070 "subsystems": [ 00:06:27.070 { 00:06:27.070 "subsystem": "bdev", 00:06:27.070 "config": [ 00:06:27.070 { 00:06:27.070 "params": { 00:06:27.070 "trtype": "pcie", 00:06:27.070 "traddr": "0000:00:10.0", 00:06:27.070 "name": "Nvme0" 00:06:27.070 }, 00:06:27.070 "method": "bdev_nvme_attach_controller" 00:06:27.070 }, 00:06:27.070 { 00:06:27.070 "method": "bdev_wait_for_examine" 00:06:27.070 } 00:06:27.070 ] 00:06:27.070 } 00:06:27.070 ] 00:06:27.070 } 00:06:27.070 [2024-11-20 15:54:25.220724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.070 [2024-11-20 15:54:25.281359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.328 [2024-11-20 15:54:25.339200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.328  [2024-11-20T15:54:25.836Z] Copying: 60/60 [kB] (average 29 MBps) 00:06:27.586 00:06:27.586 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:27.586 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:27.586 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.586 15:54:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.586 [2024-11-20 15:54:25.690712] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:27.586 [2024-11-20 15:54:25.690797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:06:27.586 { 00:06:27.586 "subsystems": [ 00:06:27.586 { 00:06:27.586 "subsystem": "bdev", 00:06:27.586 "config": [ 00:06:27.586 { 00:06:27.586 "params": { 00:06:27.586 "trtype": "pcie", 00:06:27.586 "traddr": "0000:00:10.0", 00:06:27.586 "name": "Nvme0" 00:06:27.586 }, 00:06:27.586 "method": "bdev_nvme_attach_controller" 00:06:27.586 }, 00:06:27.586 { 00:06:27.586 "method": "bdev_wait_for_examine" 00:06:27.586 } 00:06:27.586 ] 00:06:27.586 } 00:06:27.586 ] 00:06:27.586 } 00:06:27.586 [2024-11-20 15:54:25.833784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.844 [2024-11-20 15:54:25.884915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.844 [2024-11-20 15:54:25.940874] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.844  [2024-11-20T15:54:26.353Z] Copying: 60/60 [kB] (average 14 MBps) 00:06:28.103 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.103 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.103 [2024-11-20 15:54:26.305356] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:28.103 [2024-11-20 15:54:26.305956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:06:28.103 { 00:06:28.103 "subsystems": [ 00:06:28.103 { 00:06:28.103 "subsystem": "bdev", 00:06:28.103 "config": [ 00:06:28.103 { 00:06:28.103 "params": { 00:06:28.103 "trtype": "pcie", 00:06:28.103 "traddr": "0000:00:10.0", 00:06:28.103 "name": "Nvme0" 00:06:28.103 }, 00:06:28.103 "method": "bdev_nvme_attach_controller" 00:06:28.103 }, 00:06:28.103 { 00:06:28.103 "method": "bdev_wait_for_examine" 00:06:28.103 } 00:06:28.103 ] 00:06:28.103 } 00:06:28.103 ] 00:06:28.103 } 00:06:28.365 [2024-11-20 15:54:26.458071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.365 [2024-11-20 15:54:26.519996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.365 [2024-11-20 15:54:26.579709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.623  [2024-11-20T15:54:27.131Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:28.881 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:28.881 15:54:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.447 15:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:29.447 15:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:29.447 15:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.447 15:54:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.447 [2024-11-20 15:54:27.591953] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:29.447 [2024-11-20 15:54:27.592061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59971 ] 00:06:29.447 { 00:06:29.447 "subsystems": [ 00:06:29.447 { 00:06:29.447 "subsystem": "bdev", 00:06:29.447 "config": [ 00:06:29.447 { 00:06:29.447 "params": { 00:06:29.447 "trtype": "pcie", 00:06:29.447 "traddr": "0000:00:10.0", 00:06:29.447 "name": "Nvme0" 00:06:29.447 }, 00:06:29.447 "method": "bdev_nvme_attach_controller" 00:06:29.447 }, 00:06:29.447 { 00:06:29.447 "method": "bdev_wait_for_examine" 00:06:29.447 } 00:06:29.447 ] 00:06:29.447 } 00:06:29.447 ] 00:06:29.447 } 00:06:29.707 [2024-11-20 15:54:27.742327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.707 [2024-11-20 15:54:27.807890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.707 [2024-11-20 15:54:27.867011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.965  [2024-11-20T15:54:28.215Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:29.965 00:06:29.965 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:29.965 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:29.965 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:29.965 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.223 { 00:06:30.223 "subsystems": [ 00:06:30.223 { 00:06:30.223 "subsystem": "bdev", 00:06:30.223 "config": [ 00:06:30.223 { 00:06:30.223 "params": { 00:06:30.223 "trtype": "pcie", 00:06:30.223 "traddr": "0000:00:10.0", 00:06:30.223 "name": "Nvme0" 00:06:30.223 }, 00:06:30.223 "method": "bdev_nvme_attach_controller" 00:06:30.223 }, 00:06:30.223 { 00:06:30.223 "method": "bdev_wait_for_examine" 00:06:30.223 } 00:06:30.223 ] 00:06:30.223 } 00:06:30.223 ] 00:06:30.223 } 00:06:30.223 [2024-11-20 15:54:28.231606] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:30.223 [2024-11-20 15:54:28.231714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59979 ] 00:06:30.223 [2024-11-20 15:54:28.381453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.223 [2024-11-20 15:54:28.450085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.480 [2024-11-20 15:54:28.508801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.480  [2024-11-20T15:54:28.988Z] Copying: 60/60 [kB] (average 58 MBps) 00:06:30.738 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:30.738 15:54:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:30.738 [2024-11-20 15:54:28.882542] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:30.738 [2024-11-20 15:54:28.882642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:06:30.738 { 00:06:30.738 "subsystems": [ 00:06:30.738 { 00:06:30.738 "subsystem": "bdev", 00:06:30.738 "config": [ 00:06:30.738 { 00:06:30.738 "params": { 00:06:30.738 "trtype": "pcie", 00:06:30.738 "traddr": "0000:00:10.0", 00:06:30.738 "name": "Nvme0" 00:06:30.738 }, 00:06:30.738 "method": "bdev_nvme_attach_controller" 00:06:30.738 }, 00:06:30.738 { 00:06:30.738 "method": "bdev_wait_for_examine" 00:06:30.738 } 00:06:30.738 ] 00:06:30.738 } 00:06:30.738 ] 00:06:30.738 } 00:06:30.996 [2024-11-20 15:54:29.034250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.996 [2024-11-20 15:54:29.098396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.996 [2024-11-20 15:54:29.155561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:31.254  [2024-11-20T15:54:29.504Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:31.254 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:31.254 15:54:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.820 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:31.820 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:31.820 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.820 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.820 [2024-11-20 15:54:30.062507] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:31.820 [2024-11-20 15:54:30.062612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60019 ] 00:06:31.820 { 00:06:31.820 "subsystems": [ 00:06:31.820 { 00:06:31.820 "subsystem": "bdev", 00:06:31.820 "config": [ 00:06:31.820 { 00:06:31.820 "params": { 00:06:31.820 "trtype": "pcie", 00:06:31.820 "traddr": "0000:00:10.0", 00:06:31.820 "name": "Nvme0" 00:06:31.820 }, 00:06:31.820 "method": "bdev_nvme_attach_controller" 00:06:31.820 }, 00:06:31.820 { 00:06:31.820 "method": "bdev_wait_for_examine" 00:06:31.820 } 00:06:31.820 ] 00:06:31.820 } 00:06:31.820 ] 00:06:31.820 } 00:06:32.078 [2024-11-20 15:54:30.212354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.078 [2024-11-20 15:54:30.264698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.078 [2024-11-20 15:54:30.317917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.336  [2024-11-20T15:54:30.845Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:32.595 00:06:32.595 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:32.595 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:32.595 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.595 15:54:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.595 { 00:06:32.595 "subsystems": [ 00:06:32.595 { 00:06:32.595 "subsystem": "bdev", 00:06:32.595 "config": [ 00:06:32.595 { 00:06:32.595 "params": { 00:06:32.595 "trtype": "pcie", 00:06:32.595 "traddr": "0000:00:10.0", 00:06:32.595 "name": "Nvme0" 00:06:32.595 }, 00:06:32.595 "method": "bdev_nvme_attach_controller" 00:06:32.595 }, 00:06:32.595 { 00:06:32.595 "method": "bdev_wait_for_examine" 00:06:32.595 } 00:06:32.595 ] 00:06:32.595 } 00:06:32.595 ] 00:06:32.595 } 00:06:32.595 [2024-11-20 15:54:30.674450] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:32.595 [2024-11-20 15:54:30.674554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:06:32.595 [2024-11-20 15:54:30.821834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.854 [2024-11-20 15:54:30.873905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.854 [2024-11-20 15:54:30.929458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.854  [2024-11-20T15:54:31.363Z] Copying: 56/56 [kB] (average 27 MBps) 00:06:33.113 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.113 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.113 [2024-11-20 15:54:31.298948] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:33.113 [2024-11-20 15:54:31.299047] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60050 ] 00:06:33.113 { 00:06:33.113 "subsystems": [ 00:06:33.113 { 00:06:33.113 "subsystem": "bdev", 00:06:33.113 "config": [ 00:06:33.113 { 00:06:33.113 "params": { 00:06:33.113 "trtype": "pcie", 00:06:33.113 "traddr": "0000:00:10.0", 00:06:33.113 "name": "Nvme0" 00:06:33.113 }, 00:06:33.113 "method": "bdev_nvme_attach_controller" 00:06:33.113 }, 00:06:33.113 { 00:06:33.113 "method": "bdev_wait_for_examine" 00:06:33.113 } 00:06:33.113 ] 00:06:33.113 } 00:06:33.113 ] 00:06:33.113 } 00:06:33.373 [2024-11-20 15:54:31.451864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.373 [2024-11-20 15:54:31.512411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.373 [2024-11-20 15:54:31.570605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.632  [2024-11-20T15:54:31.882Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:33.632 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:33.632 15:54:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.568 15:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:34.568 15:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:34.568 15:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.568 15:54:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.568 { 00:06:34.568 "subsystems": [ 00:06:34.568 { 00:06:34.568 "subsystem": "bdev", 00:06:34.568 "config": [ 00:06:34.568 { 00:06:34.568 "params": { 00:06:34.568 "trtype": "pcie", 00:06:34.568 "traddr": "0000:00:10.0", 00:06:34.568 "name": "Nvme0" 00:06:34.568 }, 00:06:34.568 "method": "bdev_nvme_attach_controller" 00:06:34.568 }, 00:06:34.568 { 00:06:34.568 "method": "bdev_wait_for_examine" 00:06:34.568 } 00:06:34.568 ] 00:06:34.568 } 00:06:34.568 ] 00:06:34.568 } 00:06:34.568 [2024-11-20 15:54:32.527369] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:34.568 [2024-11-20 15:54:32.527497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60079 ] 00:06:34.568 [2024-11-20 15:54:32.673685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.568 [2024-11-20 15:54:32.748403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.568 [2024-11-20 15:54:32.808857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:34.856  [2024-11-20T15:54:33.372Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:35.122 00:06:35.122 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:35.122 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:35.122 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.122 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.122 [2024-11-20 15:54:33.171146] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:35.122 [2024-11-20 15:54:33.171284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60088 ] 00:06:35.122 { 00:06:35.122 "subsystems": [ 00:06:35.122 { 00:06:35.122 "subsystem": "bdev", 00:06:35.122 "config": [ 00:06:35.122 { 00:06:35.122 "params": { 00:06:35.122 "trtype": "pcie", 00:06:35.122 "traddr": "0000:00:10.0", 00:06:35.122 "name": "Nvme0" 00:06:35.122 }, 00:06:35.122 "method": "bdev_nvme_attach_controller" 00:06:35.122 }, 00:06:35.122 { 00:06:35.122 "method": "bdev_wait_for_examine" 00:06:35.122 } 00:06:35.122 ] 00:06:35.122 } 00:06:35.122 ] 00:06:35.122 } 00:06:35.122 [2024-11-20 15:54:33.319244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.401 [2024-11-20 15:54:33.380857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.401 [2024-11-20 15:54:33.437169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.401  [2024-11-20T15:54:33.962Z] Copying: 56/56 [kB] (average 54 MBps) 00:06:35.712 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.712 15:54:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.712 { 00:06:35.712 "subsystems": [ 00:06:35.712 { 00:06:35.712 "subsystem": "bdev", 00:06:35.712 "config": [ 00:06:35.712 { 00:06:35.712 "params": { 00:06:35.712 "trtype": "pcie", 00:06:35.712 "traddr": "0000:00:10.0", 00:06:35.712 "name": "Nvme0" 00:06:35.712 }, 00:06:35.712 "method": "bdev_nvme_attach_controller" 00:06:35.712 }, 00:06:35.712 { 00:06:35.712 "method": "bdev_wait_for_examine" 00:06:35.712 } 00:06:35.712 ] 00:06:35.712 } 00:06:35.712 ] 00:06:35.712 } 00:06:35.712 [2024-11-20 15:54:33.810107] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:35.712 [2024-11-20 15:54:33.810208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60109 ] 00:06:35.970 [2024-11-20 15:54:33.960282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.970 [2024-11-20 15:54:34.026297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.970 [2024-11-20 15:54:34.084281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.970  [2024-11-20T15:54:34.479Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:36.229 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:36.229 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.797 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:36.797 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:36.797 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.797 15:54:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.797 { 00:06:36.797 "subsystems": [ 00:06:36.797 { 00:06:36.797 "subsystem": "bdev", 00:06:36.797 "config": [ 00:06:36.797 { 00:06:36.797 "params": { 00:06:36.797 "trtype": "pcie", 00:06:36.797 "traddr": "0000:00:10.0", 00:06:36.797 "name": "Nvme0" 00:06:36.797 }, 00:06:36.797 "method": "bdev_nvme_attach_controller" 00:06:36.797 }, 00:06:36.797 { 00:06:36.797 "method": "bdev_wait_for_examine" 00:06:36.797 } 00:06:36.797 ] 00:06:36.797 } 00:06:36.797 ] 00:06:36.797 } 00:06:36.797 [2024-11-20 15:54:35.009954] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:36.797 [2024-11-20 15:54:35.010066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60128 ] 00:06:37.055 [2024-11-20 15:54:35.164858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.055 [2024-11-20 15:54:35.229144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.055 [2024-11-20 15:54:35.287823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.313  [2024-11-20T15:54:35.821Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:37.571 00:06:37.571 15:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:37.571 15:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:37.571 15:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.571 15:54:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.571 { 00:06:37.571 "subsystems": [ 00:06:37.571 { 00:06:37.571 "subsystem": "bdev", 00:06:37.571 "config": [ 00:06:37.571 { 00:06:37.571 "params": { 00:06:37.571 "trtype": "pcie", 00:06:37.571 "traddr": "0000:00:10.0", 00:06:37.571 "name": "Nvme0" 00:06:37.571 }, 00:06:37.571 "method": "bdev_nvme_attach_controller" 00:06:37.571 }, 00:06:37.571 { 00:06:37.571 "method": "bdev_wait_for_examine" 00:06:37.571 } 00:06:37.571 ] 00:06:37.571 } 00:06:37.571 ] 00:06:37.571 } 00:06:37.571 [2024-11-20 15:54:35.660885] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:37.571 [2024-11-20 15:54:35.661004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60146 ] 00:06:37.571 [2024-11-20 15:54:35.810042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.829 [2024-11-20 15:54:35.873241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.829 [2024-11-20 15:54:35.928545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.829  [2024-11-20T15:54:36.338Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:38.088 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.088 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.088 { 00:06:38.088 "subsystems": [ 00:06:38.088 { 00:06:38.088 "subsystem": "bdev", 00:06:38.088 "config": [ 00:06:38.088 { 00:06:38.088 "params": { 00:06:38.088 "trtype": "pcie", 00:06:38.088 "traddr": "0000:00:10.0", 00:06:38.088 "name": "Nvme0" 00:06:38.088 }, 00:06:38.088 "method": "bdev_nvme_attach_controller" 00:06:38.088 }, 00:06:38.088 { 00:06:38.088 "method": "bdev_wait_for_examine" 00:06:38.088 } 00:06:38.088 ] 00:06:38.088 } 00:06:38.088 ] 00:06:38.088 } 00:06:38.088 [2024-11-20 15:54:36.307139] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:38.088 [2024-11-20 15:54:36.307273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60157 ] 00:06:38.346 [2024-11-20 15:54:36.457113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.346 [2024-11-20 15:54:36.518460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.346 [2024-11-20 15:54:36.574190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.604  [2024-11-20T15:54:37.113Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:38.863 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:38.863 15:54:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.431 15:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:39.431 15:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.431 15:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.431 15:54:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.431 { 00:06:39.431 "subsystems": [ 00:06:39.431 { 00:06:39.431 "subsystem": "bdev", 00:06:39.431 "config": [ 00:06:39.431 { 00:06:39.431 "params": { 00:06:39.431 "trtype": "pcie", 00:06:39.431 "traddr": "0000:00:10.0", 00:06:39.431 "name": "Nvme0" 00:06:39.431 }, 00:06:39.431 "method": "bdev_nvme_attach_controller" 00:06:39.431 }, 00:06:39.431 { 00:06:39.431 "method": "bdev_wait_for_examine" 00:06:39.431 } 00:06:39.431 ] 00:06:39.431 } 00:06:39.431 ] 00:06:39.431 } 00:06:39.431 [2024-11-20 15:54:37.487100] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:39.431 [2024-11-20 15:54:37.487219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60176 ] 00:06:39.431 [2024-11-20 15:54:37.635667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.688 [2024-11-20 15:54:37.705814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.688 [2024-11-20 15:54:37.765060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.688  [2024-11-20T15:54:38.197Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:39.947 00:06:39.947 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:39.947 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:39.947 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.947 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.947 { 00:06:39.947 "subsystems": [ 00:06:39.947 { 00:06:39.947 "subsystem": "bdev", 00:06:39.947 "config": [ 00:06:39.947 { 00:06:39.947 "params": { 00:06:39.947 "trtype": "pcie", 00:06:39.947 "traddr": "0000:00:10.0", 00:06:39.947 "name": "Nvme0" 00:06:39.947 }, 00:06:39.947 "method": "bdev_nvme_attach_controller" 00:06:39.947 }, 00:06:39.947 { 00:06:39.947 "method": "bdev_wait_for_examine" 00:06:39.947 } 00:06:39.947 ] 00:06:39.947 } 00:06:39.947 ] 00:06:39.947 } 00:06:39.947 [2024-11-20 15:54:38.149234] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:39.947 [2024-11-20 15:54:38.149334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60195 ] 00:06:40.205 [2024-11-20 15:54:38.295262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.205 [2024-11-20 15:54:38.360572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.205 [2024-11-20 15:54:38.419783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.465  [2024-11-20T15:54:38.973Z] Copying: 48/48 [kB] (average 46 MBps) 00:06:40.723 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.723 15:54:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.723 [2024-11-20 15:54:38.805471] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:40.723 [2024-11-20 15:54:38.806438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60215 ] 00:06:40.723 { 00:06:40.723 "subsystems": [ 00:06:40.723 { 00:06:40.723 "subsystem": "bdev", 00:06:40.723 "config": [ 00:06:40.723 { 00:06:40.723 "params": { 00:06:40.723 "trtype": "pcie", 00:06:40.723 "traddr": "0000:00:10.0", 00:06:40.723 "name": "Nvme0" 00:06:40.723 }, 00:06:40.723 "method": "bdev_nvme_attach_controller" 00:06:40.723 }, 00:06:40.723 { 00:06:40.723 "method": "bdev_wait_for_examine" 00:06:40.723 } 00:06:40.723 ] 00:06:40.723 } 00:06:40.723 ] 00:06:40.723 } 00:06:40.723 [2024-11-20 15:54:38.958183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.982 [2024-11-20 15:54:39.026043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.982 [2024-11-20 15:54:39.082433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.982  [2024-11-20T15:54:39.490Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:41.240 00:06:41.240 00:06:41.240 real 0m15.047s 00:06:41.240 user 0m11.074s 00:06:41.240 sys 0m5.518s 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.240 ************************************ 00:06:41.240 END TEST dd_rw 00:06:41.240 ************************************ 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.240 ************************************ 00:06:41.240 START TEST dd_rw_offset 00:06:41.240 ************************************ 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:41.240 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=38aq80a9ntv1gw658ejyvcj452uvfa1z0l4zx4pynbx87rc9tbbvo6cx899m948ale48miver65mufz0t02ycng8cqlubpq5j9ky6n7zswvsrz54q2cezyvtu2u8mjwqbn0fe60pi2fiqw6mlqppa5v9cmv42iojvpmoq7bwujd6cp564c2lrqq9aak2z6q7qu5ckqfr8h4a1qhwbg16c1hval7pxcviwydsiicyiblupib3u1mxqpobwx9zq52qb5gsfk8m0bprdg8tzobdim5sp0d2tllyz7ergyu736m13ebf9g579r8uqifp005m7lpc5ewp4ir6xivef3af461ck87u3mvkdake8b5o9uat5xzd5dgfsilimf0jxt4dh5u0pm9i6vttn5hyk98w3ib5ezfhn3282vs0mpkyybs4iziospsa98cp5nmjudoe5nbefl3jumi0w71r0fdw54ggeo01q7ldsv86bdmr678vjfaqps204pi6o4rntmk2xhqa9jflh85qtv64mcoplwvt5tktumal4w36iosowfwx2zu3rxcvt0wtm900orf6hb7y3yxqzrc0vx13ba79ccjd3frph18q30o5q1g8f4wmhretlh0ooneuhtihmo6e44p8l269kxo3o93j26t3uvpxd3bd9puxfwm0qbtmd28r3d6fwmgedanmroky3ipikdkeavidgh8ufuus0aprx5yx5i291b4pefw0pcidhkv5cq5bubt1tvsf13hifx34jw9i63pydj0p20g87hqv9v9vhrv3ba6zwtt0lm607kw2jk41d8xgj5tl3ueqdryd5g5ak2b7qhilzo7gamqufr8o7gj1xs7tdyliwmzaph95t0gszeu8sjs1b17fuq239xv93f9aky6j45v9o5tsktkvlg8skb1e79tsl6vbj9zyazvpt5n73ziv64wudwl2x85yk3yja9w1caf6bkyn5bmzn8xpnyq30zk1lsxee5k01ln50rmtoygtak2fmgxit59qc89bnhpwt1flhefrvc48uaye3a9fhhwlmrpu2jgp2hdoll3tiwf7gkpcbzrgzyo74qw35e8onol965utxrd8ct4rsy82ff1u70dbrr1ag1cptkvn6lbu054b2nelvqxn5r3hvdaly94bng5u9eaeqv21nw4fdbetx7r0p7iqbd2trf8l71er2d02tq9ue2zi5b9tdumy4mta9ocgbus9mamzo9atthlybj3yfwxpw73g9hmcm809za1uxnpth8myq1ibonjy4gwiuhp13pwl7tbnog6e7yu58u0o9vlk30edynyqg9zgucumullborjmfgfewonnc4xy39f91z1gyvi4apy3uja8s00oggs40o7xpe3th0fa1q2nrn98lnq8gnt4gk5hvu6aofax6jikygctjl0xhtpra00temt9c4yuvljzrwmo39p4t68ti362sj904kc7874aivsszdud6nhvvacywic29p9numu5g30q9g2eycekhtpvdkimz2jvkpuyo1vpa1ylgt8r7qej18m39cjmt2yfc1bdfbcvstst7dzp65ueszycmo4bg6ze8a0ln477rlsujurve8cknb4cav0keiyuanzrp064aa2bzv7c8236wlu1m0ivj5c8kfg4hf4yxy6yle1rp4k0f00jehiw5996ze3tuib774zke05oyfsffzw3h0tqmljdw0gppo3dkrmw5fmxh62zsz1mg0w3huzbyzng9qb5sip2c7vrf7nbmax4tw1tfevkzx0422sf0mjo5ainchm09i1sldo4k9181b52en73g7iko29f3474fx1j30r663ifugrgz5ycdzf36neu14igljh4i2pyrfw9hssvynypkj2m6s3pvqai3mmmpsewd0jg9m88ujbrfd019uw6doq9m3nbzdzwmtthjsn7yjcdtuco2qki9o97md58cfu2quzu2pn10bipfr95cipw7s4j5gg8qs3us9f766yizfd217vrl1tj3helot2w9q8vdic1cxmfqnopi9yi4d6tbe5endchq7szgyo171t8dkjx5rbmzmqn2me0r0bah7jjpbsvzo0p3ufql9rfivc7xf57l19pi66obbq7h45zky67apf3exuyfongd4x45u273b6s8i3xyzl7vvtt9bauwqy9re1vh3ghuphpgfnhvtlqfq0xxwekamrdqbtym771c49awjvk4jrefinuzxtp5r4vk1ikrjanqwzm6bauaf1ynny5u9fw2ksf60n4zch91aza2z3c6xkbt8lai0wff50xu0iqmrttnuxvny1de76hjek8gc7bxeoet3vjzdt6r5lt24gxydb8cvy0n941nrvxvn5eswyra6btppm3sj06rkrjupoqgfiunlk2ukp78h5x43fsfylrtpj77iut0hwrsa8ou4wbm7apdgp9jr52kko2yzdj4zt3acszv8b4mpoahpeey6335cwezowx3gn4n0kz81hzdzbjlp29nyyr8ancvy7747inhxzrtes10sbsi90rua1wh00c5t9gs293yrk8026vhfobvk9aad6zl2vz73bbexv7ikzzn11wi1auv4wb6kzfj70hiyi9vfv8twskub2zabb1c2md145r23zqtl676qde1iy8vlncjriaa0bc6mrp54u25dnh1kpg6yv8f01rzb852204xnhvhjz4klzlcfsuz61qivdbnz0c3gqbjlfrjjvedcvt3h98wgxllgjmr3oknwrlfsvzlwhcpyxufbtuh80wiwielnb9fundh1mrpmn9iemlvmx35fsuhqktylnkwfcl0gekeffo5k97wnfsqmvkrdub6237o3150lky5z3ak2a8jbr9a181j7tgouzim9dd4vywotscxgqpxj4ceueq0sv8y4q4mbbqw0q84csl3v64h8s48unlsmw6jrjwi1b73rbudro6ok6xzwlvdw5o2jnfdzat898gmfo9azc21e4lctrf6h4t6w6f8m9c51rb0m391e7rmix4vzsytzm6w77cg1fh876j2pohu3h9c30jzvfbtienbqbypvi2w00hktubwdgradxtwfnq5n34eu225vf6n9rzirsonhwlrpm1bkc35frx34awrchoydyuubt4qiv9k0vc8rg7p5bnf5fr46lcy22yawdwt6hed7c55s0vp3ljblv87bj2da8l1onfkfz6t2le7j14uypfierx9drs18atssvfxsaonsksk93bbfmaq780s9lfqr7fy4m0h6kn7pzh08yac7zsewxblj2rkx7traswbxoo7qraxar7gmmwoucoujksmiupmuh0nibveti1176p7ydeskskf8rie5zrd0w4jkhg1s6elerlmvogimlmyk86wsluxryow6kz0ynnedhzi9putq3b6hdkshjjwks4c4tq0rs7m6d9434oyqnp1feln991gp4qrh6kc306zeuf3lfspluobf9dxm6hsi750cik1p50x8cl0afx56
fuyw7i5cczm4pe37l7zsxbtomlctaoy3ttchkh7kiw7du8w4gp1hiq2gntgzkg0q3a3njgyd73bw1ovb1z2nzp3ixjmqv9ij575fuxucsiupyxo5fsqbpo0lw7uveilrnjkjj1jioq53jncp29vu851ijekvpcyeor9ue623wqhhrm0vnnffk725bk32sdijue0pvkp9crkxywy6ido2ep4dcpy74ydvop88vf5ss8fbz1ml8p5l4up0cdu4vr691nza5wadx4i4u64gq7v8bvfxe732mwn91hojudmi7s79ycysj3i1c0jprob4m63r9fs492ihyujtzt6lv7a92qj5d7us7fj3pcyziuzrol43r96hxt1un1twxo9wdy403d1ejcun0gzikbc2vgr2a6ppc9havezpds8hs84bay17dycbpdyj0kdktehaema0z5kxdv9o73js5v1pf630jbatryz6grspn042usmqhzi8rlmbs1a3svjbczhnn1d9t5d4a336epc3lnl26xv18e4fr4yxo991km 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:41.498 15:54:39 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:41.498 { 00:06:41.498 "subsystems": [ 00:06:41.498 { 00:06:41.498 "subsystem": "bdev", 00:06:41.498 "config": [ 00:06:41.498 { 00:06:41.498 "params": { 00:06:41.498 "trtype": "pcie", 00:06:41.498 "traddr": "0000:00:10.0", 00:06:41.498 "name": "Nvme0" 00:06:41.498 }, 00:06:41.498 "method": "bdev_nvme_attach_controller" 00:06:41.498 }, 00:06:41.498 { 00:06:41.498 "method": "bdev_wait_for_examine" 00:06:41.498 } 00:06:41.498 ] 00:06:41.498 } 00:06:41.498 ] 00:06:41.498 } 00:06:41.498 [2024-11-20 15:54:39.561474] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:41.498 [2024-11-20 15:54:39.561592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60241 ] 00:06:41.498 [2024-11-20 15:54:39.716040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.756 [2024-11-20 15:54:39.787427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.756 [2024-11-20 15:54:39.846408] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.756  [2024-11-20T15:54:40.264Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.014 00:06:42.014 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:42.014 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:42.014 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:42.014 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.014 { 00:06:42.014 "subsystems": [ 00:06:42.014 { 00:06:42.014 "subsystem": "bdev", 00:06:42.014 "config": [ 00:06:42.014 { 00:06:42.014 "params": { 00:06:42.014 "trtype": "pcie", 00:06:42.014 "traddr": "0000:00:10.0", 00:06:42.014 "name": "Nvme0" 00:06:42.014 }, 00:06:42.014 "method": "bdev_nvme_attach_controller" 00:06:42.014 }, 00:06:42.014 { 00:06:42.014 "method": "bdev_wait_for_examine" 00:06:42.014 } 00:06:42.014 ] 00:06:42.014 } 00:06:42.014 ] 00:06:42.014 } 00:06:42.014 [2024-11-20 15:54:40.228186] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:42.014 [2024-11-20 15:54:40.228333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60260 ] 00:06:42.273 [2024-11-20 15:54:40.378130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.273 [2024-11-20 15:54:40.442412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.273 [2024-11-20 15:54:40.499283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.531  [2024-11-20T15:54:41.041Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:42.791 00:06:42.791 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 38aq80a9ntv1gw658ejyvcj452uvfa1z0l4zx4pynbx87rc9tbbvo6cx899m948ale48miver65mufz0t02ycng8cqlubpq5j9ky6n7zswvsrz54q2cezyvtu2u8mjwqbn0fe60pi2fiqw6mlqppa5v9cmv42iojvpmoq7bwujd6cp564c2lrqq9aak2z6q7qu5ckqfr8h4a1qhwbg16c1hval7pxcviwydsiicyiblupib3u1mxqpobwx9zq52qb5gsfk8m0bprdg8tzobdim5sp0d2tllyz7ergyu736m13ebf9g579r8uqifp005m7lpc5ewp4ir6xivef3af461ck87u3mvkdake8b5o9uat5xzd5dgfsilimf0jxt4dh5u0pm9i6vttn5hyk98w3ib5ezfhn3282vs0mpkyybs4iziospsa98cp5nmjudoe5nbefl3jumi0w71r0fdw54ggeo01q7ldsv86bdmr678vjfaqps204pi6o4rntmk2xhqa9jflh85qtv64mcoplwvt5tktumal4w36iosowfwx2zu3rxcvt0wtm900orf6hb7y3yxqzrc0vx13ba79ccjd3frph18q30o5q1g8f4wmhretlh0ooneuhtihmo6e44p8l269kxo3o93j26t3uvpxd3bd9puxfwm0qbtmd28r3d6fwmgedanmroky3ipikdkeavidgh8ufuus0aprx5yx5i291b4pefw0pcidhkv5cq5bubt1tvsf13hifx34jw9i63pydj0p20g87hqv9v9vhrv3ba6zwtt0lm607kw2jk41d8xgj5tl3ueqdryd5g5ak2b7qhilzo7gamqufr8o7gj1xs7tdyliwmzaph95t0gszeu8sjs1b17fuq239xv93f9aky6j45v9o5tsktkvlg8skb1e79tsl6vbj9zyazvpt5n73ziv64wudwl2x85yk3yja9w1caf6bkyn5bmzn8xpnyq30zk1lsxee5k01ln50rmtoygtak2fmgxit59qc89bnhpwt1flhefrvc48uaye3a9fhhwlmrpu2jgp2hdoll3tiwf7gkpcbzrgzyo74qw35e8onol965utxrd8ct4rsy82ff1u70dbrr1ag1cptkvn6lbu054b2nelvqxn5r3hvdaly94bng5u9eaeqv21nw4fdbetx7r0p7iqbd2trf8l71er2d02tq9ue2zi5b9tdumy4mta9ocgbus9mamzo9atthlybj3yfwxpw73g9hmcm809za1uxnpth8myq1ibonjy4gwiuhp13pwl7tbnog6e7yu58u0o9vlk30edynyqg9zgucumullborjmfgfewonnc4xy39f91z1gyvi4apy3uja8s00oggs40o7xpe3th0fa1q2nrn98lnq8gnt4gk5hvu6aofax6jikygctjl0xhtpra00temt9c4yuvljzrwmo39p4t68ti362sj904kc7874aivsszdud6nhvvacywic29p9numu5g30q9g2eycekhtpvdkimz2jvkpuyo1vpa1ylgt8r7qej18m39cjmt2yfc1bdfbcvstst7dzp65ueszycmo4bg6ze8a0ln477rlsujurve8cknb4cav0keiyuanzrp064aa2bzv7c8236wlu1m0ivj5c8kfg4hf4yxy6yle1rp4k0f00jehiw5996ze3tuib774zke05oyfsffzw3h0tqmljdw0gppo3dkrmw5fmxh62zsz1mg0w3huzbyzng9qb5sip2c7vrf7nbmax4tw1tfevkzx0422sf0mjo5ainchm09i1sldo4k9181b52en73g7iko29f3474fx1j30r663ifugrgz5ycdzf36neu14igljh4i2pyrfw9hssvynypkj2m6s3pvqai3mmmpsewd0jg9m88ujbrfd019uw6doq9m3nbzdzwmtthjsn7yjcdtuco2qki9o97md58cfu2quzu2pn10bipfr95cipw7s4j5gg8qs3us9f766yizfd217vrl1tj3helot2w9q8vdic1cxmfqnopi9yi4d6tbe5endchq7szgyo171t8dkjx5rbmzmqn2me0r0bah7jjpbsvzo0p3ufql9rfivc7xf57l19pi66obbq7h45zky67apf3exuyfongd4x45u273b6s8i3xyzl7vvtt9bauwqy9re1vh3ghuphpgfnhvtlqfq0xxwekamrdqbtym771c49awjvk4jrefinuzxtp5r4vk1ikrjanqwzm6bauaf1ynny5u9fw2ksf60n4zch91aza2z3c6xkbt8lai0wff50xu0iqmrttnuxvny1de76hjek8gc7bxeoet3vjzdt6r5lt24gxydb8cvy0n941nrvxvn5eswyra6btppm3sj06rkrjupoqgfiunlk2ukp78h5x43fsfylrtpj77iut0hwrsa8ou4wbm7apdgp9jr52kko2yzdj4zt3acszv8b4mpoahpeey6335cwezowx3gn4n0kz81hzdzbjlp29nyyr8ancvy7747inhxzrtes10sbsi90rua1wh00c5t9gs293yrk8026vhfobvk9aad6zl2vz
73bbexv7ikzzn11wi1auv4wb6kzfj70hiyi9vfv8twskub2zabb1c2md145r23zqtl676qde1iy8vlncjriaa0bc6mrp54u25dnh1kpg6yv8f01rzb852204xnhvhjz4klzlcfsuz61qivdbnz0c3gqbjlfrjjvedcvt3h98wgxllgjmr3oknwrlfsvzlwhcpyxufbtuh80wiwielnb9fundh1mrpmn9iemlvmx35fsuhqktylnkwfcl0gekeffo5k97wnfsqmvkrdub6237o3150lky5z3ak2a8jbr9a181j7tgouzim9dd4vywotscxgqpxj4ceueq0sv8y4q4mbbqw0q84csl3v64h8s48unlsmw6jrjwi1b73rbudro6ok6xzwlvdw5o2jnfdzat898gmfo9azc21e4lctrf6h4t6w6f8m9c51rb0m391e7rmix4vzsytzm6w77cg1fh876j2pohu3h9c30jzvfbtienbqbypvi2w00hktubwdgradxtwfnq5n34eu225vf6n9rzirsonhwlrpm1bkc35frx34awrchoydyuubt4qiv9k0vc8rg7p5bnf5fr46lcy22yawdwt6hed7c55s0vp3ljblv87bj2da8l1onfkfz6t2le7j14uypfierx9drs18atssvfxsaonsksk93bbfmaq780s9lfqr7fy4m0h6kn7pzh08yac7zsewxblj2rkx7traswbxoo7qraxar7gmmwoucoujksmiupmuh0nibveti1176p7ydeskskf8rie5zrd0w4jkhg1s6elerlmvogimlmyk86wsluxryow6kz0ynnedhzi9putq3b6hdkshjjwks4c4tq0rs7m6d9434oyqnp1feln991gp4qrh6kc306zeuf3lfspluobf9dxm6hsi750cik1p50x8cl0afx56fuyw7i5cczm4pe37l7zsxbtomlctaoy3ttchkh7kiw7du8w4gp1hiq2gntgzkg0q3a3njgyd73bw1ovb1z2nzp3ixjmqv9ij575fuxucsiupyxo5fsqbpo0lw7uveilrnjkjj1jioq53jncp29vu851ijekvpcyeor9ue623wqhhrm0vnnffk725bk32sdijue0pvkp9crkxywy6ido2ep4dcpy74ydvop88vf5ss8fbz1ml8p5l4up0cdu4vr691nza5wadx4i4u64gq7v8bvfxe732mwn91hojudmi7s79ycysj3i1c0jprob4m63r9fs492ihyujtzt6lv7a92qj5d7us7fj3pcyziuzrol43r96hxt1un1twxo9wdy403d1ejcun0gzikbc2vgr2a6ppc9havezpds8hs84bay17dycbpdyj0kdktehaema0z5kxdv9o73js5v1pf630jbatryz6grspn042usmqhzi8rlmbs1a3svjbczhnn1d9t5d4a336epc3lnl26xv18e4fr4yxo991km == \3\8\a\q\8\0\a\9\n\t\v\1\g\w\6\5\8\e\j\y\v\c\j\4\5\2\u\v\f\a\1\z\0\l\4\z\x\4\p\y\n\b\x\8\7\r\c\9\t\b\b\v\o\6\c\x\8\9\9\m\9\4\8\a\l\e\4\8\m\i\v\e\r\6\5\m\u\f\z\0\t\0\2\y\c\n\g\8\c\q\l\u\b\p\q\5\j\9\k\y\6\n\7\z\s\w\v\s\r\z\5\4\q\2\c\e\z\y\v\t\u\2\u\8\m\j\w\q\b\n\0\f\e\6\0\p\i\2\f\i\q\w\6\m\l\q\p\p\a\5\v\9\c\m\v\4\2\i\o\j\v\p\m\o\q\7\b\w\u\j\d\6\c\p\5\6\4\c\2\l\r\q\q\9\a\a\k\2\z\6\q\7\q\u\5\c\k\q\f\r\8\h\4\a\1\q\h\w\b\g\1\6\c\1\h\v\a\l\7\p\x\c\v\i\w\y\d\s\i\i\c\y\i\b\l\u\p\i\b\3\u\1\m\x\q\p\o\b\w\x\9\z\q\5\2\q\b\5\g\s\f\k\8\m\0\b\p\r\d\g\8\t\z\o\b\d\i\m\5\s\p\0\d\2\t\l\l\y\z\7\e\r\g\y\u\7\3\6\m\1\3\e\b\f\9\g\5\7\9\r\8\u\q\i\f\p\0\0\5\m\7\l\p\c\5\e\w\p\4\i\r\6\x\i\v\e\f\3\a\f\4\6\1\c\k\8\7\u\3\m\v\k\d\a\k\e\8\b\5\o\9\u\a\t\5\x\z\d\5\d\g\f\s\i\l\i\m\f\0\j\x\t\4\d\h\5\u\0\p\m\9\i\6\v\t\t\n\5\h\y\k\9\8\w\3\i\b\5\e\z\f\h\n\3\2\8\2\v\s\0\m\p\k\y\y\b\s\4\i\z\i\o\s\p\s\a\9\8\c\p\5\n\m\j\u\d\o\e\5\n\b\e\f\l\3\j\u\m\i\0\w\7\1\r\0\f\d\w\5\4\g\g\e\o\0\1\q\7\l\d\s\v\8\6\b\d\m\r\6\7\8\v\j\f\a\q\p\s\2\0\4\p\i\6\o\4\r\n\t\m\k\2\x\h\q\a\9\j\f\l\h\8\5\q\t\v\6\4\m\c\o\p\l\w\v\t\5\t\k\t\u\m\a\l\4\w\3\6\i\o\s\o\w\f\w\x\2\z\u\3\r\x\c\v\t\0\w\t\m\9\0\0\o\r\f\6\h\b\7\y\3\y\x\q\z\r\c\0\v\x\1\3\b\a\7\9\c\c\j\d\3\f\r\p\h\1\8\q\3\0\o\5\q\1\g\8\f\4\w\m\h\r\e\t\l\h\0\o\o\n\e\u\h\t\i\h\m\o\6\e\4\4\p\8\l\2\6\9\k\x\o\3\o\9\3\j\2\6\t\3\u\v\p\x\d\3\b\d\9\p\u\x\f\w\m\0\q\b\t\m\d\2\8\r\3\d\6\f\w\m\g\e\d\a\n\m\r\o\k\y\3\i\p\i\k\d\k\e\a\v\i\d\g\h\8\u\f\u\u\s\0\a\p\r\x\5\y\x\5\i\2\9\1\b\4\p\e\f\w\0\p\c\i\d\h\k\v\5\c\q\5\b\u\b\t\1\t\v\s\f\1\3\h\i\f\x\3\4\j\w\9\i\6\3\p\y\d\j\0\p\2\0\g\8\7\h\q\v\9\v\9\v\h\r\v\3\b\a\6\z\w\t\t\0\l\m\6\0\7\k\w\2\j\k\4\1\d\8\x\g\j\5\t\l\3\u\e\q\d\r\y\d\5\g\5\a\k\2\b\7\q\h\i\l\z\o\7\g\a\m\q\u\f\r\8\o\7\g\j\1\x\s\7\t\d\y\l\i\w\m\z\a\p\h\9\5\t\0\g\s\z\e\u\8\s\j\s\1\b\1\7\f\u\q\2\3\9\x\v\9\3\f\9\a\k\y\6\j\4\5\v\9\o\5\t\s\k\t\k\v\l\g\8\s\k\b\1\e\7\9\t\s\l\6\v\b\j\9\z\y\a\z\v\p\t\5\n\7\3\z\i\v\6\4\w\u\d\w\l\2\x\8\5\y\k\3\y\j\a\9\w\1\c\a\f\6\b\k\y\n\5\b\m\z\n\8\x\p\n\y\q\3\0\z\k\1\l\s\x\e\e\5\k\0\1\l\n\5\0\r\m\t\o\y\g\t\a\k\2\f\m\g\x\i\t\5\9\q\c\8\9\
b\n\h\p\w\t\1\f\l\h\e\f\r\v\c\4\8\u\a\y\e\3\a\9\f\h\h\w\l\m\r\p\u\2\j\g\p\2\h\d\o\l\l\3\t\i\w\f\7\g\k\p\c\b\z\r\g\z\y\o\7\4\q\w\3\5\e\8\o\n\o\l\9\6\5\u\t\x\r\d\8\c\t\4\r\s\y\8\2\f\f\1\u\7\0\d\b\r\r\1\a\g\1\c\p\t\k\v\n\6\l\b\u\0\5\4\b\2\n\e\l\v\q\x\n\5\r\3\h\v\d\a\l\y\9\4\b\n\g\5\u\9\e\a\e\q\v\2\1\n\w\4\f\d\b\e\t\x\7\r\0\p\7\i\q\b\d\2\t\r\f\8\l\7\1\e\r\2\d\0\2\t\q\9\u\e\2\z\i\5\b\9\t\d\u\m\y\4\m\t\a\9\o\c\g\b\u\s\9\m\a\m\z\o\9\a\t\t\h\l\y\b\j\3\y\f\w\x\p\w\7\3\g\9\h\m\c\m\8\0\9\z\a\1\u\x\n\p\t\h\8\m\y\q\1\i\b\o\n\j\y\4\g\w\i\u\h\p\1\3\p\w\l\7\t\b\n\o\g\6\e\7\y\u\5\8\u\0\o\9\v\l\k\3\0\e\d\y\n\y\q\g\9\z\g\u\c\u\m\u\l\l\b\o\r\j\m\f\g\f\e\w\o\n\n\c\4\x\y\3\9\f\9\1\z\1\g\y\v\i\4\a\p\y\3\u\j\a\8\s\0\0\o\g\g\s\4\0\o\7\x\p\e\3\t\h\0\f\a\1\q\2\n\r\n\9\8\l\n\q\8\g\n\t\4\g\k\5\h\v\u\6\a\o\f\a\x\6\j\i\k\y\g\c\t\j\l\0\x\h\t\p\r\a\0\0\t\e\m\t\9\c\4\y\u\v\l\j\z\r\w\m\o\3\9\p\4\t\6\8\t\i\3\6\2\s\j\9\0\4\k\c\7\8\7\4\a\i\v\s\s\z\d\u\d\6\n\h\v\v\a\c\y\w\i\c\2\9\p\9\n\u\m\u\5\g\3\0\q\9\g\2\e\y\c\e\k\h\t\p\v\d\k\i\m\z\2\j\v\k\p\u\y\o\1\v\p\a\1\y\l\g\t\8\r\7\q\e\j\1\8\m\3\9\c\j\m\t\2\y\f\c\1\b\d\f\b\c\v\s\t\s\t\7\d\z\p\6\5\u\e\s\z\y\c\m\o\4\b\g\6\z\e\8\a\0\l\n\4\7\7\r\l\s\u\j\u\r\v\e\8\c\k\n\b\4\c\a\v\0\k\e\i\y\u\a\n\z\r\p\0\6\4\a\a\2\b\z\v\7\c\8\2\3\6\w\l\u\1\m\0\i\v\j\5\c\8\k\f\g\4\h\f\4\y\x\y\6\y\l\e\1\r\p\4\k\0\f\0\0\j\e\h\i\w\5\9\9\6\z\e\3\t\u\i\b\7\7\4\z\k\e\0\5\o\y\f\s\f\f\z\w\3\h\0\t\q\m\l\j\d\w\0\g\p\p\o\3\d\k\r\m\w\5\f\m\x\h\6\2\z\s\z\1\m\g\0\w\3\h\u\z\b\y\z\n\g\9\q\b\5\s\i\p\2\c\7\v\r\f\7\n\b\m\a\x\4\t\w\1\t\f\e\v\k\z\x\0\4\2\2\s\f\0\m\j\o\5\a\i\n\c\h\m\0\9\i\1\s\l\d\o\4\k\9\1\8\1\b\5\2\e\n\7\3\g\7\i\k\o\2\9\f\3\4\7\4\f\x\1\j\3\0\r\6\6\3\i\f\u\g\r\g\z\5\y\c\d\z\f\3\6\n\e\u\1\4\i\g\l\j\h\4\i\2\p\y\r\f\w\9\h\s\s\v\y\n\y\p\k\j\2\m\6\s\3\p\v\q\a\i\3\m\m\m\p\s\e\w\d\0\j\g\9\m\8\8\u\j\b\r\f\d\0\1\9\u\w\6\d\o\q\9\m\3\n\b\z\d\z\w\m\t\t\h\j\s\n\7\y\j\c\d\t\u\c\o\2\q\k\i\9\o\9\7\m\d\5\8\c\f\u\2\q\u\z\u\2\p\n\1\0\b\i\p\f\r\9\5\c\i\p\w\7\s\4\j\5\g\g\8\q\s\3\u\s\9\f\7\6\6\y\i\z\f\d\2\1\7\v\r\l\1\t\j\3\h\e\l\o\t\2\w\9\q\8\v\d\i\c\1\c\x\m\f\q\n\o\p\i\9\y\i\4\d\6\t\b\e\5\e\n\d\c\h\q\7\s\z\g\y\o\1\7\1\t\8\d\k\j\x\5\r\b\m\z\m\q\n\2\m\e\0\r\0\b\a\h\7\j\j\p\b\s\v\z\o\0\p\3\u\f\q\l\9\r\f\i\v\c\7\x\f\5\7\l\1\9\p\i\6\6\o\b\b\q\7\h\4\5\z\k\y\6\7\a\p\f\3\e\x\u\y\f\o\n\g\d\4\x\4\5\u\2\7\3\b\6\s\8\i\3\x\y\z\l\7\v\v\t\t\9\b\a\u\w\q\y\9\r\e\1\v\h\3\g\h\u\p\h\p\g\f\n\h\v\t\l\q\f\q\0\x\x\w\e\k\a\m\r\d\q\b\t\y\m\7\7\1\c\4\9\a\w\j\v\k\4\j\r\e\f\i\n\u\z\x\t\p\5\r\4\v\k\1\i\k\r\j\a\n\q\w\z\m\6\b\a\u\a\f\1\y\n\n\y\5\u\9\f\w\2\k\s\f\6\0\n\4\z\c\h\9\1\a\z\a\2\z\3\c\6\x\k\b\t\8\l\a\i\0\w\f\f\5\0\x\u\0\i\q\m\r\t\t\n\u\x\v\n\y\1\d\e\7\6\h\j\e\k\8\g\c\7\b\x\e\o\e\t\3\v\j\z\d\t\6\r\5\l\t\2\4\g\x\y\d\b\8\c\v\y\0\n\9\4\1\n\r\v\x\v\n\5\e\s\w\y\r\a\6\b\t\p\p\m\3\s\j\0\6\r\k\r\j\u\p\o\q\g\f\i\u\n\l\k\2\u\k\p\7\8\h\5\x\4\3\f\s\f\y\l\r\t\p\j\7\7\i\u\t\0\h\w\r\s\a\8\o\u\4\w\b\m\7\a\p\d\g\p\9\j\r\5\2\k\k\o\2\y\z\d\j\4\z\t\3\a\c\s\z\v\8\b\4\m\p\o\a\h\p\e\e\y\6\3\3\5\c\w\e\z\o\w\x\3\g\n\4\n\0\k\z\8\1\h\z\d\z\b\j\l\p\2\9\n\y\y\r\8\a\n\c\v\y\7\7\4\7\i\n\h\x\z\r\t\e\s\1\0\s\b\s\i\9\0\r\u\a\1\w\h\0\0\c\5\t\9\g\s\2\9\3\y\r\k\8\0\2\6\v\h\f\o\b\v\k\9\a\a\d\6\z\l\2\v\z\7\3\b\b\e\x\v\7\i\k\z\z\n\1\1\w\i\1\a\u\v\4\w\b\6\k\z\f\j\7\0\h\i\y\i\9\v\f\v\8\t\w\s\k\u\b\2\z\a\b\b\1\c\2\m\d\1\4\5\r\2\3\z\q\t\l\6\7\6\q\d\e\1\i\y\8\v\l\n\c\j\r\i\a\a\0\b\c\6\m\r\p\5\4\u\2\5\d\n\h\1\k\p\g\6\y\v\8\f\0\1\r\z\b\8\5\2\2\0\4\x\n\h\v\h\j\z\4\k\l\z\l\c\f\s\u\z\6\1\q\i\v\d\b\n\z\0\c\3\g\q\b\j\l\f\r\j\j\v\e\d\c\v\t\3\h\9\8\w\g\x\l\l\g\j\m\r\3\o\k\n\w\r\l\f\s\v\z\l\w\h\c\p\y\x\u\f\b\t\u\h
\8\0\w\i\w\i\e\l\n\b\9\f\u\n\d\h\1\m\r\p\m\n\9\i\e\m\l\v\m\x\3\5\f\s\u\h\q\k\t\y\l\n\k\w\f\c\l\0\g\e\k\e\f\f\o\5\k\9\7\w\n\f\s\q\m\v\k\r\d\u\b\6\2\3\7\o\3\1\5\0\l\k\y\5\z\3\a\k\2\a\8\j\b\r\9\a\1\8\1\j\7\t\g\o\u\z\i\m\9\d\d\4\v\y\w\o\t\s\c\x\g\q\p\x\j\4\c\e\u\e\q\0\s\v\8\y\4\q\4\m\b\b\q\w\0\q\8\4\c\s\l\3\v\6\4\h\8\s\4\8\u\n\l\s\m\w\6\j\r\j\w\i\1\b\7\3\r\b\u\d\r\o\6\o\k\6\x\z\w\l\v\d\w\5\o\2\j\n\f\d\z\a\t\8\9\8\g\m\f\o\9\a\z\c\2\1\e\4\l\c\t\r\f\6\h\4\t\6\w\6\f\8\m\9\c\5\1\r\b\0\m\3\9\1\e\7\r\m\i\x\4\v\z\s\y\t\z\m\6\w\7\7\c\g\1\f\h\8\7\6\j\2\p\o\h\u\3\h\9\c\3\0\j\z\v\f\b\t\i\e\n\b\q\b\y\p\v\i\2\w\0\0\h\k\t\u\b\w\d\g\r\a\d\x\t\w\f\n\q\5\n\3\4\e\u\2\2\5\v\f\6\n\9\r\z\i\r\s\o\n\h\w\l\r\p\m\1\b\k\c\3\5\f\r\x\3\4\a\w\r\c\h\o\y\d\y\u\u\b\t\4\q\i\v\9\k\0\v\c\8\r\g\7\p\5\b\n\f\5\f\r\4\6\l\c\y\2\2\y\a\w\d\w\t\6\h\e\d\7\c\5\5\s\0\v\p\3\l\j\b\l\v\8\7\b\j\2\d\a\8\l\1\o\n\f\k\f\z\6\t\2\l\e\7\j\1\4\u\y\p\f\i\e\r\x\9\d\r\s\1\8\a\t\s\s\v\f\x\s\a\o\n\s\k\s\k\9\3\b\b\f\m\a\q\7\8\0\s\9\l\f\q\r\7\f\y\4\m\0\h\6\k\n\7\p\z\h\0\8\y\a\c\7\z\s\e\w\x\b\l\j\2\r\k\x\7\t\r\a\s\w\b\x\o\o\7\q\r\a\x\a\r\7\g\m\m\w\o\u\c\o\u\j\k\s\m\i\u\p\m\u\h\0\n\i\b\v\e\t\i\1\1\7\6\p\7\y\d\e\s\k\s\k\f\8\r\i\e\5\z\r\d\0\w\4\j\k\h\g\1\s\6\e\l\e\r\l\m\v\o\g\i\m\l\m\y\k\8\6\w\s\l\u\x\r\y\o\w\6\k\z\0\y\n\n\e\d\h\z\i\9\p\u\t\q\3\b\6\h\d\k\s\h\j\j\w\k\s\4\c\4\t\q\0\r\s\7\m\6\d\9\4\3\4\o\y\q\n\p\1\f\e\l\n\9\9\1\g\p\4\q\r\h\6\k\c\3\0\6\z\e\u\f\3\l\f\s\p\l\u\o\b\f\9\d\x\m\6\h\s\i\7\5\0\c\i\k\1\p\5\0\x\8\c\l\0\a\f\x\5\6\f\u\y\w\7\i\5\c\c\z\m\4\p\e\3\7\l\7\z\s\x\b\t\o\m\l\c\t\a\o\y\3\t\t\c\h\k\h\7\k\i\w\7\d\u\8\w\4\g\p\1\h\i\q\2\g\n\t\g\z\k\g\0\q\3\a\3\n\j\g\y\d\7\3\b\w\1\o\v\b\1\z\2\n\z\p\3\i\x\j\m\q\v\9\i\j\5\7\5\f\u\x\u\c\s\i\u\p\y\x\o\5\f\s\q\b\p\o\0\l\w\7\u\v\e\i\l\r\n\j\k\j\j\1\j\i\o\q\5\3\j\n\c\p\2\9\v\u\8\5\1\i\j\e\k\v\p\c\y\e\o\r\9\u\e\6\2\3\w\q\h\h\r\m\0\v\n\n\f\f\k\7\2\5\b\k\3\2\s\d\i\j\u\e\0\p\v\k\p\9\c\r\k\x\y\w\y\6\i\d\o\2\e\p\4\d\c\p\y\7\4\y\d\v\o\p\8\8\v\f\5\s\s\8\f\b\z\1\m\l\8\p\5\l\4\u\p\0\c\d\u\4\v\r\6\9\1\n\z\a\5\w\a\d\x\4\i\4\u\6\4\g\q\7\v\8\b\v\f\x\e\7\3\2\m\w\n\9\1\h\o\j\u\d\m\i\7\s\7\9\y\c\y\s\j\3\i\1\c\0\j\p\r\o\b\4\m\6\3\r\9\f\s\4\9\2\i\h\y\u\j\t\z\t\6\l\v\7\a\9\2\q\j\5\d\7\u\s\7\f\j\3\p\c\y\z\i\u\z\r\o\l\4\3\r\9\6\h\x\t\1\u\n\1\t\w\x\o\9\w\d\y\4\0\3\d\1\e\j\c\u\n\0\g\z\i\k\b\c\2\v\g\r\2\a\6\p\p\c\9\h\a\v\e\z\p\d\s\8\h\s\8\4\b\a\y\1\7\d\y\c\b\p\d\y\j\0\k\d\k\t\e\h\a\e\m\a\0\z\5\k\x\d\v\9\o\7\3\j\s\5\v\1\p\f\6\3\0\j\b\a\t\r\y\z\6\g\r\s\p\n\0\4\2\u\s\m\q\h\z\i\8\r\l\m\b\s\1\a\3\s\v\j\b\c\z\h\n\n\1\d\9\t\5\d\4\a\3\3\6\e\p\c\3\l\n\l\2\6\x\v\1\8\e\4\f\r\4\y\x\o\9\9\1\k\m ]] 00:06:42.792 00:06:42.792 real 0m1.377s 00:06:42.792 user 0m0.951s 00:06:42.792 sys 0m0.619s 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:42.792 ************************************ 00:06:42.792 END TEST dd_rw_offset 00:06:42.792 ************************************ 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.792 15:54:40 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.792 { 00:06:42.792 "subsystems": [ 00:06:42.792 { 00:06:42.792 "subsystem": "bdev", 00:06:42.792 "config": [ 00:06:42.792 { 00:06:42.792 "params": { 00:06:42.792 "trtype": "pcie", 00:06:42.792 "traddr": "0000:00:10.0", 00:06:42.792 "name": "Nvme0" 00:06:42.792 }, 00:06:42.792 "method": "bdev_nvme_attach_controller" 00:06:42.792 }, 00:06:42.792 { 00:06:42.792 "method": "bdev_wait_for_examine" 00:06:42.793 } 00:06:42.793 ] 00:06:42.793 } 00:06:42.793 ] 00:06:42.793 } 00:06:42.793 [2024-11-20 15:54:40.921954] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:42.793 [2024-11-20 15:54:40.922066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60296 ] 00:06:43.052 [2024-11-20 15:54:41.073132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.052 [2024-11-20 15:54:41.139055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.052 [2024-11-20 15:54:41.196338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.311  [2024-11-20T15:54:41.561Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:43.311 00:06:43.311 15:54:41 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.311 00:06:43.311 real 0m18.305s 00:06:43.311 user 0m13.171s 00:06:43.311 sys 0m6.822s 00:06:43.311 15:54:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.311 15:54:41 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.311 ************************************ 00:06:43.311 END TEST spdk_dd_basic_rw 00:06:43.311 ************************************ 00:06:43.570 15:54:41 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.570 15:54:41 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.570 15:54:41 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.570 15:54:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.570 ************************************ 00:06:43.570 START TEST spdk_dd_posix 00:06:43.570 ************************************ 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:43.570 * Looking for test storage... 
00:06:43.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.570 --rc genhtml_branch_coverage=1 00:06:43.570 --rc genhtml_function_coverage=1 00:06:43.570 --rc genhtml_legend=1 00:06:43.570 --rc geninfo_all_blocks=1 00:06:43.570 --rc geninfo_unexecuted_blocks=1 00:06:43.570 00:06:43.570 ' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.570 --rc genhtml_branch_coverage=1 00:06:43.570 --rc genhtml_function_coverage=1 00:06:43.570 --rc genhtml_legend=1 00:06:43.570 --rc geninfo_all_blocks=1 00:06:43.570 --rc geninfo_unexecuted_blocks=1 00:06:43.570 00:06:43.570 ' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.570 --rc genhtml_branch_coverage=1 00:06:43.570 --rc genhtml_function_coverage=1 00:06:43.570 --rc genhtml_legend=1 00:06:43.570 --rc geninfo_all_blocks=1 00:06:43.570 --rc geninfo_unexecuted_blocks=1 00:06:43.570 00:06:43.570 ' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.570 --rc genhtml_branch_coverage=1 00:06:43.570 --rc genhtml_function_coverage=1 00:06:43.570 --rc genhtml_legend=1 00:06:43.570 --rc geninfo_all_blocks=1 00:06:43.570 --rc geninfo_unexecuted_blocks=1 00:06:43.570 00:06:43.570 ' 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.570 15:54:41 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:43.571 * First test run, liburing in use 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:43.571 ************************************ 00:06:43.571 START TEST dd_flag_append 00:06:43.571 ************************************ 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=fx9m4i5pg9cikjpr9fhkqsponsrohbpz 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=6tb4kmah5bgubdmb6e2tf6mdwb94juq6 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s fx9m4i5pg9cikjpr9fhkqsponsrohbpz 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 6tb4kmah5bgubdmb6e2tf6mdwb94juq6 00:06:43.571 15:54:41 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:43.828 [2024-11-20 15:54:41.871468] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
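Note: the dd_flag_append run above writes two short random strings to dd.dump0 and dd.dump1, copies dump0 onto dump1 with --oflag=append, and expects dump1 to end up as its old contents followed by dump0. A minimal sketch of the same check with plain GNU coreutils dd (an assumption, this is not the spdk_dd binary exercised in this log):
  printf %s 'AAAA' > dump0                                     # stand-in for the 32 random bytes in dd.dump0
  printf %s 'BBBB' > dump1                                     # stand-in for dd.dump1
  dd if=dump0 of=dump1 oflag=append conv=notrunc status=none   # append mode: keep existing output, add new data at the end
  [[ "$(cat dump1)" == 'BBBBAAAA' ]] && echo 'append ok'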
00:06:43.828 [2024-11-20 15:54:41.871578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60365 ] 00:06:43.828 [2024-11-20 15:54:42.017882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.088 [2024-11-20 15:54:42.083851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.088 [2024-11-20 15:54:42.139669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.088  [2024-11-20T15:54:42.597Z] Copying: 32/32 [B] (average 31 kBps) 00:06:44.347 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 6tb4kmah5bgubdmb6e2tf6mdwb94juq6fx9m4i5pg9cikjpr9fhkqsponsrohbpz == \6\t\b\4\k\m\a\h\5\b\g\u\b\d\m\b\6\e\2\t\f\6\m\d\w\b\9\4\j\u\q\6\f\x\9\m\4\i\5\p\g\9\c\i\k\j\p\r\9\f\h\k\q\s\p\o\n\s\r\o\h\b\p\z ]] 00:06:44.347 00:06:44.347 real 0m0.584s 00:06:44.347 user 0m0.332s 00:06:44.347 sys 0m0.289s 00:06:44.347 ************************************ 00:06:44.347 END TEST dd_flag_append 00:06:44.347 ************************************ 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.347 ************************************ 00:06:44.347 START TEST dd_flag_directory 00:06:44.347 ************************************ 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.347 15:54:42 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.347 [2024-11-20 15:54:42.505321] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:44.347 [2024-11-20 15:54:42.505421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:06:44.605 [2024-11-20 15:54:42.649442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.605 [2024-11-20 15:54:42.716600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.605 [2024-11-20 15:54:42.772972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.605 [2024-11-20 15:54:42.814782] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.605 [2024-11-20 15:54:42.814856] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:44.605 [2024-11-20 15:54:42.814875] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.863 [2024-11-20 15:54:42.938857] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.863 15:54:43 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.863 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:44.863 [2024-11-20 15:54:43.084124] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:44.863 [2024-11-20 15:54:43.084335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60408 ] 00:06:45.122 [2024-11-20 15:54:43.241479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.122 [2024-11-20 15:54:43.308706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.122 [2024-11-20 15:54:43.366469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.380 [2024-11-20 15:54:43.408978] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:45.380 [2024-11-20 15:54:43.409059] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:45.380 [2024-11-20 15:54:43.409103] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.380 [2024-11-20 15:54:43.542951] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.380 00:06:45.380 real 0m1.164s 00:06:45.380 user 0m0.652s 00:06:45.380 sys 0m0.300s 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.380 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:45.380 ************************************ 00:06:45.380 END TEST dd_flag_directory 00:06:45.380 ************************************ 00:06:45.639 15:54:43 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.639 ************************************ 00:06:45.639 START TEST dd_flag_nofollow 00:06:45.639 ************************************ 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.639 15:54:43 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:45.639 [2024-11-20 15:54:43.749402] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
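Note: dd_flag_nofollow first symlinks dd.dump0.link -> dd.dump0 and dd.dump1.link -> dd.dump1, then expects the copy to fail when the symlink is opened with --iflag=nofollow. The same behaviour can be sketched with GNU dd (assumption: coreutils dd and any small file), since O_NOFOLLOW on a symlink fails with ELOOP, the "Too many levels of symbolic links" error reported below:
  printf %s 'data' > target
  ln -fs target target.link
  dd if=target.link of=/dev/null iflag=nofollow status=none    # fails: Too many levels of symbolic links
  dd if=target.link of=/dev/null status=none                   # without nofollow the symlink is followed and the copy succeeds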
00:06:45.639 [2024-11-20 15:54:43.749562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60437 ] 00:06:45.898 [2024-11-20 15:54:43.906587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.898 [2024-11-20 15:54:43.973105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.898 [2024-11-20 15:54:44.029143] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.898 [2024-11-20 15:54:44.071437] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.898 [2024-11-20 15:54:44.071520] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:45.898 [2024-11-20 15:54:44.071556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.214 [2024-11-20 15:54:44.198084] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.214 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.215 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:46.215 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.215 15:54:44 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.215 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:46.215 [2024-11-20 15:54:44.332551] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:46.215 [2024-11-20 15:54:44.332667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60446 ] 00:06:46.474 [2024-11-20 15:54:44.481255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.474 [2024-11-20 15:54:44.548212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.474 [2024-11-20 15:54:44.604587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.474 [2024-11-20 15:54:44.645248] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:46.474 [2024-11-20 15:54:44.645300] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:46.474 [2024-11-20 15:54:44.645319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.732 [2024-11-20 15:54:44.772970] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:46.732 15:54:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.732 [2024-11-20 15:54:44.927014] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:46.732 [2024-11-20 15:54:44.927190] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60454 ] 00:06:46.990 [2024-11-20 15:54:45.082748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.990 [2024-11-20 15:54:45.148266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.990 [2024-11-20 15:54:45.203859] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.248  [2024-11-20T15:54:45.498Z] Copying: 512/512 [B] (average 500 kBps) 00:06:47.248 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 4w15v20lkxygir68y228tg1dxtnw2p3vwd9l3hxbigt8af8l5ubh8uhwxat2ihxv4ofsa9pxtauukp43fvt2yslz8ntowus4b9f6pyu1n2ixb6u5e2b6jz4o7dpi6tbpza9j4qz5kcu4nua3v7rn334dsv9ipe7usb675y0437037b8oldtv9tc8darmog9dt9hzs64fgcr8axpkjfmv92xufwclt192kqy6for3ab49nox0l879xk6c0x64l5svvmd7w2l35xrtu0wjkprx5u2tlects9exjlfe6npk199514rssqubw2gark457aos9usezs1gk6o1gz7kuoc6567dk584th8ke7hd7qrb5skfi1wn6vl28rl7sbs2chg2cejau3npmgiv8yq40tq7grbulttgphqp7rozzejdcs0v0du3kux6l80pxzhdyy6kjorpydkvyzjs557plt4dl2ih5fpdpjxcm44191qpo83f3j1g7zv9tg4lzo4bivwi == \4\w\1\5\v\2\0\l\k\x\y\g\i\r\6\8\y\2\2\8\t\g\1\d\x\t\n\w\2\p\3\v\w\d\9\l\3\h\x\b\i\g\t\8\a\f\8\l\5\u\b\h\8\u\h\w\x\a\t\2\i\h\x\v\4\o\f\s\a\9\p\x\t\a\u\u\k\p\4\3\f\v\t\2\y\s\l\z\8\n\t\o\w\u\s\4\b\9\f\6\p\y\u\1\n\2\i\x\b\6\u\5\e\2\b\6\j\z\4\o\7\d\p\i\6\t\b\p\z\a\9\j\4\q\z\5\k\c\u\4\n\u\a\3\v\7\r\n\3\3\4\d\s\v\9\i\p\e\7\u\s\b\6\7\5\y\0\4\3\7\0\3\7\b\8\o\l\d\t\v\9\t\c\8\d\a\r\m\o\g\9\d\t\9\h\z\s\6\4\f\g\c\r\8\a\x\p\k\j\f\m\v\9\2\x\u\f\w\c\l\t\1\9\2\k\q\y\6\f\o\r\3\a\b\4\9\n\o\x\0\l\8\7\9\x\k\6\c\0\x\6\4\l\5\s\v\v\m\d\7\w\2\l\3\5\x\r\t\u\0\w\j\k\p\r\x\5\u\2\t\l\e\c\t\s\9\e\x\j\l\f\e\6\n\p\k\1\9\9\5\1\4\r\s\s\q\u\b\w\2\g\a\r\k\4\5\7\a\o\s\9\u\s\e\z\s\1\g\k\6\o\1\g\z\7\k\u\o\c\6\5\6\7\d\k\5\8\4\t\h\8\k\e\7\h\d\7\q\r\b\5\s\k\f\i\1\w\n\6\v\l\2\8\r\l\7\s\b\s\2\c\h\g\2\c\e\j\a\u\3\n\p\m\g\i\v\8\y\q\4\0\t\q\7\g\r\b\u\l\t\t\g\p\h\q\p\7\r\o\z\z\e\j\d\c\s\0\v\0\d\u\3\k\u\x\6\l\8\0\p\x\z\h\d\y\y\6\k\j\o\r\p\y\d\k\v\y\z\j\s\5\5\7\p\l\t\4\d\l\2\i\h\5\f\p\d\p\j\x\c\m\4\4\1\9\1\q\p\o\8\3\f\3\j\1\g\7\z\v\9\t\g\4\l\z\o\4\b\i\v\w\i ]] 00:06:47.248 00:06:47.248 real 0m1.773s 00:06:47.248 user 0m0.982s 00:06:47.248 sys 0m0.608s 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:47.248 ************************************ 00:06:47.248 END TEST dd_flag_nofollow 00:06:47.248 ************************************ 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:47.248 ************************************ 00:06:47.248 START TEST dd_flag_noatime 00:06:47.248 ************************************ 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:47.248 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:47.507 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:47.507 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732118085 00:06:47.507 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:47.507 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732118085 00:06:47.507 15:54:45 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:48.451 15:54:46 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.451 [2024-11-20 15:54:46.577700] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:48.451 [2024-11-20 15:54:46.577826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60496 ] 00:06:48.710 [2024-11-20 15:54:46.730706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.710 [2024-11-20 15:54:46.792575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.710 [2024-11-20 15:54:46.852447] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.710  [2024-11-20T15:54:47.218Z] Copying: 512/512 [B] (average 500 kBps) 00:06:48.968 00:06:48.968 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:48.968 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732118085 )) 00:06:48.968 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.968 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732118085 )) 00:06:48.968 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.968 [2024-11-20 15:54:47.129462] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
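Note: dd_flag_noatime records the access times of dd.dump0 and dd.dump1 with stat --printf=%X, sleeps one second, reads through spdk_dd with --iflag=noatime, and expects the atime to be unchanged; a later copy without the flag must advance it. A rough equivalent with GNU dd and coreutils stat (an assumption, and O_NOATIME additionally requires owning the file and a filesystem that tracks access times):
  before=$(stat -c %X dump0)
  sleep 1
  dd if=dump0 of=/dev/null iflag=noatime status=none   # read without updating the access time
  after=$(stat -c %X dump0)
  (( before == after )) && echo 'noatime ok'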
00:06:48.968 [2024-11-20 15:54:47.129549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60510 ] 00:06:49.227 [2024-11-20 15:54:47.271983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.227 [2024-11-20 15:54:47.323135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.227 [2024-11-20 15:54:47.377964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.227  [2024-11-20T15:54:47.736Z] Copying: 512/512 [B] (average 500 kBps) 00:06:49.486 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732118087 )) 00:06:49.486 00:06:49.486 real 0m2.127s 00:06:49.486 user 0m0.595s 00:06:49.486 sys 0m0.591s 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 ************************************ 00:06:49.486 END TEST dd_flag_noatime 00:06:49.486 ************************************ 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 ************************************ 00:06:49.486 START TEST dd_flags_misc 00:06:49.486 ************************************ 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:49.486 15:54:47 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:49.486 [2024-11-20 15:54:47.729106] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
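Note: dd_flags_misc loops every read flag in flags_ro=(direct nonblock) against every write flag in flags_rw=(direct nonblock sync dsync) and re-runs the copy for each combination; the runs below differ only in the --iflag/--oflag pair. A sketch of the same matrix with GNU dd (assumption: coreutils dd; direct I/O also needs block-aligned sizes, which a 512-byte test file satisfies on most filesystems):
  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      dd if=dump0 of=dump1 bs=512 iflag="$flag_ro" oflag="$flag_rw" status=none
    done
  done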
00:06:49.486 [2024-11-20 15:54:47.729225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60538 ] 00:06:49.744 [2024-11-20 15:54:47.878722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.744 [2024-11-20 15:54:47.944433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.004 [2024-11-20 15:54:48.002977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.004  [2024-11-20T15:54:48.254Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.004 00:06:50.004 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kir14isg7cet6d2865od1d8rmy9kb4ngrwqgymkwgjqlp67gnn03n54z31dmmdlsdyl8ykdag4m84gubxbdp5s99yu5z378nnpbkrjh0q4xztsr2ytvnuw6je8p0d4rzak61zfgs43708wt3u60jdokqtdz61vrkvskkspszjoneeb1a462ib0ind8xxo6hv5wlszs9z9cl9yzpalrm52o8c7irtqsbra7svxzwz58fc73v2mx6ztpth5e8z9dsv3r3veparpxdgr88gzm6szn89famnz5ct54ch49rg6t5msga0d3lyf4ynbi20ajd5h966x5zbwgvhmmell1yzeffm36r2u3x3z5tvz2zv4k9ysqe7n8shj66az1r3r8kivm9g9aoviywtir876k9si589a9jzkcleqx5awko5rptzmiillhdt1m4j6zw9zl54fd3b9fhan842zejm6ybnyrm2ymfjq8uvtbr9yr0xur02mu0smn7806wee04tbgty == \k\i\r\1\4\i\s\g\7\c\e\t\6\d\2\8\6\5\o\d\1\d\8\r\m\y\9\k\b\4\n\g\r\w\q\g\y\m\k\w\g\j\q\l\p\6\7\g\n\n\0\3\n\5\4\z\3\1\d\m\m\d\l\s\d\y\l\8\y\k\d\a\g\4\m\8\4\g\u\b\x\b\d\p\5\s\9\9\y\u\5\z\3\7\8\n\n\p\b\k\r\j\h\0\q\4\x\z\t\s\r\2\y\t\v\n\u\w\6\j\e\8\p\0\d\4\r\z\a\k\6\1\z\f\g\s\4\3\7\0\8\w\t\3\u\6\0\j\d\o\k\q\t\d\z\6\1\v\r\k\v\s\k\k\s\p\s\z\j\o\n\e\e\b\1\a\4\6\2\i\b\0\i\n\d\8\x\x\o\6\h\v\5\w\l\s\z\s\9\z\9\c\l\9\y\z\p\a\l\r\m\5\2\o\8\c\7\i\r\t\q\s\b\r\a\7\s\v\x\z\w\z\5\8\f\c\7\3\v\2\m\x\6\z\t\p\t\h\5\e\8\z\9\d\s\v\3\r\3\v\e\p\a\r\p\x\d\g\r\8\8\g\z\m\6\s\z\n\8\9\f\a\m\n\z\5\c\t\5\4\c\h\4\9\r\g\6\t\5\m\s\g\a\0\d\3\l\y\f\4\y\n\b\i\2\0\a\j\d\5\h\9\6\6\x\5\z\b\w\g\v\h\m\m\e\l\l\1\y\z\e\f\f\m\3\6\r\2\u\3\x\3\z\5\t\v\z\2\z\v\4\k\9\y\s\q\e\7\n\8\s\h\j\6\6\a\z\1\r\3\r\8\k\i\v\m\9\g\9\a\o\v\i\y\w\t\i\r\8\7\6\k\9\s\i\5\8\9\a\9\j\z\k\c\l\e\q\x\5\a\w\k\o\5\r\p\t\z\m\i\i\l\l\h\d\t\1\m\4\j\6\z\w\9\z\l\5\4\f\d\3\b\9\f\h\a\n\8\4\2\z\e\j\m\6\y\b\n\y\r\m\2\y\m\f\j\q\8\u\v\t\b\r\9\y\r\0\x\u\r\0\2\m\u\0\s\m\n\7\8\0\6\w\e\e\0\4\t\b\g\t\y ]] 00:06:50.004 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.004 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:50.262 [2024-11-20 15:54:48.288033] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:50.262 [2024-11-20 15:54:48.288181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60553 ] 00:06:50.262 [2024-11-20 15:54:48.431891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.262 [2024-11-20 15:54:48.498128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.521 [2024-11-20 15:54:48.556502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.521  [2024-11-20T15:54:49.029Z] Copying: 512/512 [B] (average 500 kBps) 00:06:50.779 00:06:50.779 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kir14isg7cet6d2865od1d8rmy9kb4ngrwqgymkwgjqlp67gnn03n54z31dmmdlsdyl8ykdag4m84gubxbdp5s99yu5z378nnpbkrjh0q4xztsr2ytvnuw6je8p0d4rzak61zfgs43708wt3u60jdokqtdz61vrkvskkspszjoneeb1a462ib0ind8xxo6hv5wlszs9z9cl9yzpalrm52o8c7irtqsbra7svxzwz58fc73v2mx6ztpth5e8z9dsv3r3veparpxdgr88gzm6szn89famnz5ct54ch49rg6t5msga0d3lyf4ynbi20ajd5h966x5zbwgvhmmell1yzeffm36r2u3x3z5tvz2zv4k9ysqe7n8shj66az1r3r8kivm9g9aoviywtir876k9si589a9jzkcleqx5awko5rptzmiillhdt1m4j6zw9zl54fd3b9fhan842zejm6ybnyrm2ymfjq8uvtbr9yr0xur02mu0smn7806wee04tbgty == \k\i\r\1\4\i\s\g\7\c\e\t\6\d\2\8\6\5\o\d\1\d\8\r\m\y\9\k\b\4\n\g\r\w\q\g\y\m\k\w\g\j\q\l\p\6\7\g\n\n\0\3\n\5\4\z\3\1\d\m\m\d\l\s\d\y\l\8\y\k\d\a\g\4\m\8\4\g\u\b\x\b\d\p\5\s\9\9\y\u\5\z\3\7\8\n\n\p\b\k\r\j\h\0\q\4\x\z\t\s\r\2\y\t\v\n\u\w\6\j\e\8\p\0\d\4\r\z\a\k\6\1\z\f\g\s\4\3\7\0\8\w\t\3\u\6\0\j\d\o\k\q\t\d\z\6\1\v\r\k\v\s\k\k\s\p\s\z\j\o\n\e\e\b\1\a\4\6\2\i\b\0\i\n\d\8\x\x\o\6\h\v\5\w\l\s\z\s\9\z\9\c\l\9\y\z\p\a\l\r\m\5\2\o\8\c\7\i\r\t\q\s\b\r\a\7\s\v\x\z\w\z\5\8\f\c\7\3\v\2\m\x\6\z\t\p\t\h\5\e\8\z\9\d\s\v\3\r\3\v\e\p\a\r\p\x\d\g\r\8\8\g\z\m\6\s\z\n\8\9\f\a\m\n\z\5\c\t\5\4\c\h\4\9\r\g\6\t\5\m\s\g\a\0\d\3\l\y\f\4\y\n\b\i\2\0\a\j\d\5\h\9\6\6\x\5\z\b\w\g\v\h\m\m\e\l\l\1\y\z\e\f\f\m\3\6\r\2\u\3\x\3\z\5\t\v\z\2\z\v\4\k\9\y\s\q\e\7\n\8\s\h\j\6\6\a\z\1\r\3\r\8\k\i\v\m\9\g\9\a\o\v\i\y\w\t\i\r\8\7\6\k\9\s\i\5\8\9\a\9\j\z\k\c\l\e\q\x\5\a\w\k\o\5\r\p\t\z\m\i\i\l\l\h\d\t\1\m\4\j\6\z\w\9\z\l\5\4\f\d\3\b\9\f\h\a\n\8\4\2\z\e\j\m\6\y\b\n\y\r\m\2\y\m\f\j\q\8\u\v\t\b\r\9\y\r\0\x\u\r\0\2\m\u\0\s\m\n\7\8\0\6\w\e\e\0\4\t\b\g\t\y ]] 00:06:50.779 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:50.779 15:54:48 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:50.779 [2024-11-20 15:54:48.848962] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:50.779 [2024-11-20 15:54:48.849082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60563 ] 00:06:50.779 [2024-11-20 15:54:49.001623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.037 [2024-11-20 15:54:49.068701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.037 [2024-11-20 15:54:49.127477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.037  [2024-11-20T15:54:49.546Z] Copying: 512/512 [B] (average 500 kBps) 00:06:51.296 00:06:51.296 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kir14isg7cet6d2865od1d8rmy9kb4ngrwqgymkwgjqlp67gnn03n54z31dmmdlsdyl8ykdag4m84gubxbdp5s99yu5z378nnpbkrjh0q4xztsr2ytvnuw6je8p0d4rzak61zfgs43708wt3u60jdokqtdz61vrkvskkspszjoneeb1a462ib0ind8xxo6hv5wlszs9z9cl9yzpalrm52o8c7irtqsbra7svxzwz58fc73v2mx6ztpth5e8z9dsv3r3veparpxdgr88gzm6szn89famnz5ct54ch49rg6t5msga0d3lyf4ynbi20ajd5h966x5zbwgvhmmell1yzeffm36r2u3x3z5tvz2zv4k9ysqe7n8shj66az1r3r8kivm9g9aoviywtir876k9si589a9jzkcleqx5awko5rptzmiillhdt1m4j6zw9zl54fd3b9fhan842zejm6ybnyrm2ymfjq8uvtbr9yr0xur02mu0smn7806wee04tbgty == \k\i\r\1\4\i\s\g\7\c\e\t\6\d\2\8\6\5\o\d\1\d\8\r\m\y\9\k\b\4\n\g\r\w\q\g\y\m\k\w\g\j\q\l\p\6\7\g\n\n\0\3\n\5\4\z\3\1\d\m\m\d\l\s\d\y\l\8\y\k\d\a\g\4\m\8\4\g\u\b\x\b\d\p\5\s\9\9\y\u\5\z\3\7\8\n\n\p\b\k\r\j\h\0\q\4\x\z\t\s\r\2\y\t\v\n\u\w\6\j\e\8\p\0\d\4\r\z\a\k\6\1\z\f\g\s\4\3\7\0\8\w\t\3\u\6\0\j\d\o\k\q\t\d\z\6\1\v\r\k\v\s\k\k\s\p\s\z\j\o\n\e\e\b\1\a\4\6\2\i\b\0\i\n\d\8\x\x\o\6\h\v\5\w\l\s\z\s\9\z\9\c\l\9\y\z\p\a\l\r\m\5\2\o\8\c\7\i\r\t\q\s\b\r\a\7\s\v\x\z\w\z\5\8\f\c\7\3\v\2\m\x\6\z\t\p\t\h\5\e\8\z\9\d\s\v\3\r\3\v\e\p\a\r\p\x\d\g\r\8\8\g\z\m\6\s\z\n\8\9\f\a\m\n\z\5\c\t\5\4\c\h\4\9\r\g\6\t\5\m\s\g\a\0\d\3\l\y\f\4\y\n\b\i\2\0\a\j\d\5\h\9\6\6\x\5\z\b\w\g\v\h\m\m\e\l\l\1\y\z\e\f\f\m\3\6\r\2\u\3\x\3\z\5\t\v\z\2\z\v\4\k\9\y\s\q\e\7\n\8\s\h\j\6\6\a\z\1\r\3\r\8\k\i\v\m\9\g\9\a\o\v\i\y\w\t\i\r\8\7\6\k\9\s\i\5\8\9\a\9\j\z\k\c\l\e\q\x\5\a\w\k\o\5\r\p\t\z\m\i\i\l\l\h\d\t\1\m\4\j\6\z\w\9\z\l\5\4\f\d\3\b\9\f\h\a\n\8\4\2\z\e\j\m\6\y\b\n\y\r\m\2\y\m\f\j\q\8\u\v\t\b\r\9\y\r\0\x\u\r\0\2\m\u\0\s\m\n\7\8\0\6\w\e\e\0\4\t\b\g\t\y ]] 00:06:51.296 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.296 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:51.296 [2024-11-20 15:54:49.424885] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:51.296 [2024-11-20 15:54:49.424993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ] 00:06:51.554 [2024-11-20 15:54:49.572237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.554 [2024-11-20 15:54:49.634924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.554 [2024-11-20 15:54:49.690520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.554  [2024-11-20T15:54:50.062Z] Copying: 512/512 [B] (average 166 kBps) 00:06:51.812 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kir14isg7cet6d2865od1d8rmy9kb4ngrwqgymkwgjqlp67gnn03n54z31dmmdlsdyl8ykdag4m84gubxbdp5s99yu5z378nnpbkrjh0q4xztsr2ytvnuw6je8p0d4rzak61zfgs43708wt3u60jdokqtdz61vrkvskkspszjoneeb1a462ib0ind8xxo6hv5wlszs9z9cl9yzpalrm52o8c7irtqsbra7svxzwz58fc73v2mx6ztpth5e8z9dsv3r3veparpxdgr88gzm6szn89famnz5ct54ch49rg6t5msga0d3lyf4ynbi20ajd5h966x5zbwgvhmmell1yzeffm36r2u3x3z5tvz2zv4k9ysqe7n8shj66az1r3r8kivm9g9aoviywtir876k9si589a9jzkcleqx5awko5rptzmiillhdt1m4j6zw9zl54fd3b9fhan842zejm6ybnyrm2ymfjq8uvtbr9yr0xur02mu0smn7806wee04tbgty == \k\i\r\1\4\i\s\g\7\c\e\t\6\d\2\8\6\5\o\d\1\d\8\r\m\y\9\k\b\4\n\g\r\w\q\g\y\m\k\w\g\j\q\l\p\6\7\g\n\n\0\3\n\5\4\z\3\1\d\m\m\d\l\s\d\y\l\8\y\k\d\a\g\4\m\8\4\g\u\b\x\b\d\p\5\s\9\9\y\u\5\z\3\7\8\n\n\p\b\k\r\j\h\0\q\4\x\z\t\s\r\2\y\t\v\n\u\w\6\j\e\8\p\0\d\4\r\z\a\k\6\1\z\f\g\s\4\3\7\0\8\w\t\3\u\6\0\j\d\o\k\q\t\d\z\6\1\v\r\k\v\s\k\k\s\p\s\z\j\o\n\e\e\b\1\a\4\6\2\i\b\0\i\n\d\8\x\x\o\6\h\v\5\w\l\s\z\s\9\z\9\c\l\9\y\z\p\a\l\r\m\5\2\o\8\c\7\i\r\t\q\s\b\r\a\7\s\v\x\z\w\z\5\8\f\c\7\3\v\2\m\x\6\z\t\p\t\h\5\e\8\z\9\d\s\v\3\r\3\v\e\p\a\r\p\x\d\g\r\8\8\g\z\m\6\s\z\n\8\9\f\a\m\n\z\5\c\t\5\4\c\h\4\9\r\g\6\t\5\m\s\g\a\0\d\3\l\y\f\4\y\n\b\i\2\0\a\j\d\5\h\9\6\6\x\5\z\b\w\g\v\h\m\m\e\l\l\1\y\z\e\f\f\m\3\6\r\2\u\3\x\3\z\5\t\v\z\2\z\v\4\k\9\y\s\q\e\7\n\8\s\h\j\6\6\a\z\1\r\3\r\8\k\i\v\m\9\g\9\a\o\v\i\y\w\t\i\r\8\7\6\k\9\s\i\5\8\9\a\9\j\z\k\c\l\e\q\x\5\a\w\k\o\5\r\p\t\z\m\i\i\l\l\h\d\t\1\m\4\j\6\z\w\9\z\l\5\4\f\d\3\b\9\f\h\a\n\8\4\2\z\e\j\m\6\y\b\n\y\r\m\2\y\m\f\j\q\8\u\v\t\b\r\9\y\r\0\x\u\r\0\2\m\u\0\s\m\n\7\8\0\6\w\e\e\0\4\t\b\g\t\y ]] 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.812 15:54:49 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:51.812 [2024-11-20 15:54:49.994828] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:51.812 [2024-11-20 15:54:49.994935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:06:52.071 [2024-11-20 15:54:50.140933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.071 [2024-11-20 15:54:50.220604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.071 [2024-11-20 15:54:50.281578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.381  [2024-11-20T15:54:50.631Z] Copying: 512/512 [B] (average 500 kBps) 00:06:52.381 00:06:52.381 15:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hzmar6q6bbld647pxe6guktd6r4868l5b4ztckkvc6h9m2ebc66sy2tzw2t8u1dth3gvhmanqeiprpocp0tl40xq1v83wf336v4a85a2fqh367xqpbfn9tn6yaqa9hbwumvqd8yy6nrx6lybo6cufw1b95glxuw1gm20wfhvcv6tcopboj2n3pjbqdnvieqyt9tlm85pmz1zfl5nm2dr3hsksft6zttu7e418ekdjvz2mniv75z1ljaqma1pv4hkn890vbq334626sfne5lbnxid94vk30e6goyjgg8nfz374w5as7dw3u8nxb3djsevau4yu1hxbpiy11xmrmidajtqqil6vgtmzrfzy5jqp31iqdboei99654minrez79q6592wktwh86ftim981hiouqx9gkgg18fuq5yuiu38t7sy66a41zkjjwnexeznouqd0y3ndzrj8rjifopxzfg8vfhllkblgtien5ecv9oqn6ok6u2rfzlt0760bul7rso == \h\z\m\a\r\6\q\6\b\b\l\d\6\4\7\p\x\e\6\g\u\k\t\d\6\r\4\8\6\8\l\5\b\4\z\t\c\k\k\v\c\6\h\9\m\2\e\b\c\6\6\s\y\2\t\z\w\2\t\8\u\1\d\t\h\3\g\v\h\m\a\n\q\e\i\p\r\p\o\c\p\0\t\l\4\0\x\q\1\v\8\3\w\f\3\3\6\v\4\a\8\5\a\2\f\q\h\3\6\7\x\q\p\b\f\n\9\t\n\6\y\a\q\a\9\h\b\w\u\m\v\q\d\8\y\y\6\n\r\x\6\l\y\b\o\6\c\u\f\w\1\b\9\5\g\l\x\u\w\1\g\m\2\0\w\f\h\v\c\v\6\t\c\o\p\b\o\j\2\n\3\p\j\b\q\d\n\v\i\e\q\y\t\9\t\l\m\8\5\p\m\z\1\z\f\l\5\n\m\2\d\r\3\h\s\k\s\f\t\6\z\t\t\u\7\e\4\1\8\e\k\d\j\v\z\2\m\n\i\v\7\5\z\1\l\j\a\q\m\a\1\p\v\4\h\k\n\8\9\0\v\b\q\3\3\4\6\2\6\s\f\n\e\5\l\b\n\x\i\d\9\4\v\k\3\0\e\6\g\o\y\j\g\g\8\n\f\z\3\7\4\w\5\a\s\7\d\w\3\u\8\n\x\b\3\d\j\s\e\v\a\u\4\y\u\1\h\x\b\p\i\y\1\1\x\m\r\m\i\d\a\j\t\q\q\i\l\6\v\g\t\m\z\r\f\z\y\5\j\q\p\3\1\i\q\d\b\o\e\i\9\9\6\5\4\m\i\n\r\e\z\7\9\q\6\5\9\2\w\k\t\w\h\8\6\f\t\i\m\9\8\1\h\i\o\u\q\x\9\g\k\g\g\1\8\f\u\q\5\y\u\i\u\3\8\t\7\s\y\6\6\a\4\1\z\k\j\j\w\n\e\x\e\z\n\o\u\q\d\0\y\3\n\d\z\r\j\8\r\j\i\f\o\p\x\z\f\g\8\v\f\h\l\l\k\b\l\g\t\i\e\n\5\e\c\v\9\o\q\n\6\o\k\6\u\2\r\f\z\l\t\0\7\6\0\b\u\l\7\r\s\o ]] 00:06:52.381 15:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.381 15:54:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:52.381 [2024-11-20 15:54:50.574317] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:52.381 [2024-11-20 15:54:50.574419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60591 ] 00:06:52.638 [2024-11-20 15:54:50.717326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.638 [2024-11-20 15:54:50.783391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.638 [2024-11-20 15:54:50.839569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.638  [2024-11-20T15:54:51.147Z] Copying: 512/512 [B] (average 500 kBps) 00:06:52.897 00:06:52.897 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hzmar6q6bbld647pxe6guktd6r4868l5b4ztckkvc6h9m2ebc66sy2tzw2t8u1dth3gvhmanqeiprpocp0tl40xq1v83wf336v4a85a2fqh367xqpbfn9tn6yaqa9hbwumvqd8yy6nrx6lybo6cufw1b95glxuw1gm20wfhvcv6tcopboj2n3pjbqdnvieqyt9tlm85pmz1zfl5nm2dr3hsksft6zttu7e418ekdjvz2mniv75z1ljaqma1pv4hkn890vbq334626sfne5lbnxid94vk30e6goyjgg8nfz374w5as7dw3u8nxb3djsevau4yu1hxbpiy11xmrmidajtqqil6vgtmzrfzy5jqp31iqdboei99654minrez79q6592wktwh86ftim981hiouqx9gkgg18fuq5yuiu38t7sy66a41zkjjwnexeznouqd0y3ndzrj8rjifopxzfg8vfhllkblgtien5ecv9oqn6ok6u2rfzlt0760bul7rso == \h\z\m\a\r\6\q\6\b\b\l\d\6\4\7\p\x\e\6\g\u\k\t\d\6\r\4\8\6\8\l\5\b\4\z\t\c\k\k\v\c\6\h\9\m\2\e\b\c\6\6\s\y\2\t\z\w\2\t\8\u\1\d\t\h\3\g\v\h\m\a\n\q\e\i\p\r\p\o\c\p\0\t\l\4\0\x\q\1\v\8\3\w\f\3\3\6\v\4\a\8\5\a\2\f\q\h\3\6\7\x\q\p\b\f\n\9\t\n\6\y\a\q\a\9\h\b\w\u\m\v\q\d\8\y\y\6\n\r\x\6\l\y\b\o\6\c\u\f\w\1\b\9\5\g\l\x\u\w\1\g\m\2\0\w\f\h\v\c\v\6\t\c\o\p\b\o\j\2\n\3\p\j\b\q\d\n\v\i\e\q\y\t\9\t\l\m\8\5\p\m\z\1\z\f\l\5\n\m\2\d\r\3\h\s\k\s\f\t\6\z\t\t\u\7\e\4\1\8\e\k\d\j\v\z\2\m\n\i\v\7\5\z\1\l\j\a\q\m\a\1\p\v\4\h\k\n\8\9\0\v\b\q\3\3\4\6\2\6\s\f\n\e\5\l\b\n\x\i\d\9\4\v\k\3\0\e\6\g\o\y\j\g\g\8\n\f\z\3\7\4\w\5\a\s\7\d\w\3\u\8\n\x\b\3\d\j\s\e\v\a\u\4\y\u\1\h\x\b\p\i\y\1\1\x\m\r\m\i\d\a\j\t\q\q\i\l\6\v\g\t\m\z\r\f\z\y\5\j\q\p\3\1\i\q\d\b\o\e\i\9\9\6\5\4\m\i\n\r\e\z\7\9\q\6\5\9\2\w\k\t\w\h\8\6\f\t\i\m\9\8\1\h\i\o\u\q\x\9\g\k\g\g\1\8\f\u\q\5\y\u\i\u\3\8\t\7\s\y\6\6\a\4\1\z\k\j\j\w\n\e\x\e\z\n\o\u\q\d\0\y\3\n\d\z\r\j\8\r\j\i\f\o\p\x\z\f\g\8\v\f\h\l\l\k\b\l\g\t\i\e\n\5\e\c\v\9\o\q\n\6\o\k\6\u\2\r\f\z\l\t\0\7\6\0\b\u\l\7\r\s\o ]] 00:06:52.897 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.897 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:52.897 [2024-11-20 15:54:51.110273] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:52.897 [2024-11-20 15:54:51.110374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60601 ] 00:06:53.155 [2024-11-20 15:54:51.252052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.155 [2024-11-20 15:54:51.315054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.155 [2024-11-20 15:54:51.371962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.414  [2024-11-20T15:54:51.664Z] Copying: 512/512 [B] (average 250 kBps) 00:06:53.414 00:06:53.414 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hzmar6q6bbld647pxe6guktd6r4868l5b4ztckkvc6h9m2ebc66sy2tzw2t8u1dth3gvhmanqeiprpocp0tl40xq1v83wf336v4a85a2fqh367xqpbfn9tn6yaqa9hbwumvqd8yy6nrx6lybo6cufw1b95glxuw1gm20wfhvcv6tcopboj2n3pjbqdnvieqyt9tlm85pmz1zfl5nm2dr3hsksft6zttu7e418ekdjvz2mniv75z1ljaqma1pv4hkn890vbq334626sfne5lbnxid94vk30e6goyjgg8nfz374w5as7dw3u8nxb3djsevau4yu1hxbpiy11xmrmidajtqqil6vgtmzrfzy5jqp31iqdboei99654minrez79q6592wktwh86ftim981hiouqx9gkgg18fuq5yuiu38t7sy66a41zkjjwnexeznouqd0y3ndzrj8rjifopxzfg8vfhllkblgtien5ecv9oqn6ok6u2rfzlt0760bul7rso == \h\z\m\a\r\6\q\6\b\b\l\d\6\4\7\p\x\e\6\g\u\k\t\d\6\r\4\8\6\8\l\5\b\4\z\t\c\k\k\v\c\6\h\9\m\2\e\b\c\6\6\s\y\2\t\z\w\2\t\8\u\1\d\t\h\3\g\v\h\m\a\n\q\e\i\p\r\p\o\c\p\0\t\l\4\0\x\q\1\v\8\3\w\f\3\3\6\v\4\a\8\5\a\2\f\q\h\3\6\7\x\q\p\b\f\n\9\t\n\6\y\a\q\a\9\h\b\w\u\m\v\q\d\8\y\y\6\n\r\x\6\l\y\b\o\6\c\u\f\w\1\b\9\5\g\l\x\u\w\1\g\m\2\0\w\f\h\v\c\v\6\t\c\o\p\b\o\j\2\n\3\p\j\b\q\d\n\v\i\e\q\y\t\9\t\l\m\8\5\p\m\z\1\z\f\l\5\n\m\2\d\r\3\h\s\k\s\f\t\6\z\t\t\u\7\e\4\1\8\e\k\d\j\v\z\2\m\n\i\v\7\5\z\1\l\j\a\q\m\a\1\p\v\4\h\k\n\8\9\0\v\b\q\3\3\4\6\2\6\s\f\n\e\5\l\b\n\x\i\d\9\4\v\k\3\0\e\6\g\o\y\j\g\g\8\n\f\z\3\7\4\w\5\a\s\7\d\w\3\u\8\n\x\b\3\d\j\s\e\v\a\u\4\y\u\1\h\x\b\p\i\y\1\1\x\m\r\m\i\d\a\j\t\q\q\i\l\6\v\g\t\m\z\r\f\z\y\5\j\q\p\3\1\i\q\d\b\o\e\i\9\9\6\5\4\m\i\n\r\e\z\7\9\q\6\5\9\2\w\k\t\w\h\8\6\f\t\i\m\9\8\1\h\i\o\u\q\x\9\g\k\g\g\1\8\f\u\q\5\y\u\i\u\3\8\t\7\s\y\6\6\a\4\1\z\k\j\j\w\n\e\x\e\z\n\o\u\q\d\0\y\3\n\d\z\r\j\8\r\j\i\f\o\p\x\z\f\g\8\v\f\h\l\l\k\b\l\g\t\i\e\n\5\e\c\v\9\o\q\n\6\o\k\6\u\2\r\f\z\l\t\0\7\6\0\b\u\l\7\r\s\o ]] 00:06:53.414 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.414 15:54:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:53.672 [2024-11-20 15:54:51.665113] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:53.672 [2024-11-20 15:54:51.665309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:06:53.672 [2024-11-20 15:54:51.814169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.672 [2024-11-20 15:54:51.877355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.930 [2024-11-20 15:54:51.931632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.930  [2024-11-20T15:54:52.180Z] Copying: 512/512 [B] (average 500 kBps) 00:06:53.930 00:06:53.930 15:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hzmar6q6bbld647pxe6guktd6r4868l5b4ztckkvc6h9m2ebc66sy2tzw2t8u1dth3gvhmanqeiprpocp0tl40xq1v83wf336v4a85a2fqh367xqpbfn9tn6yaqa9hbwumvqd8yy6nrx6lybo6cufw1b95glxuw1gm20wfhvcv6tcopboj2n3pjbqdnvieqyt9tlm85pmz1zfl5nm2dr3hsksft6zttu7e418ekdjvz2mniv75z1ljaqma1pv4hkn890vbq334626sfne5lbnxid94vk30e6goyjgg8nfz374w5as7dw3u8nxb3djsevau4yu1hxbpiy11xmrmidajtqqil6vgtmzrfzy5jqp31iqdboei99654minrez79q6592wktwh86ftim981hiouqx9gkgg18fuq5yuiu38t7sy66a41zkjjwnexeznouqd0y3ndzrj8rjifopxzfg8vfhllkblgtien5ecv9oqn6ok6u2rfzlt0760bul7rso == \h\z\m\a\r\6\q\6\b\b\l\d\6\4\7\p\x\e\6\g\u\k\t\d\6\r\4\8\6\8\l\5\b\4\z\t\c\k\k\v\c\6\h\9\m\2\e\b\c\6\6\s\y\2\t\z\w\2\t\8\u\1\d\t\h\3\g\v\h\m\a\n\q\e\i\p\r\p\o\c\p\0\t\l\4\0\x\q\1\v\8\3\w\f\3\3\6\v\4\a\8\5\a\2\f\q\h\3\6\7\x\q\p\b\f\n\9\t\n\6\y\a\q\a\9\h\b\w\u\m\v\q\d\8\y\y\6\n\r\x\6\l\y\b\o\6\c\u\f\w\1\b\9\5\g\l\x\u\w\1\g\m\2\0\w\f\h\v\c\v\6\t\c\o\p\b\o\j\2\n\3\p\j\b\q\d\n\v\i\e\q\y\t\9\t\l\m\8\5\p\m\z\1\z\f\l\5\n\m\2\d\r\3\h\s\k\s\f\t\6\z\t\t\u\7\e\4\1\8\e\k\d\j\v\z\2\m\n\i\v\7\5\z\1\l\j\a\q\m\a\1\p\v\4\h\k\n\8\9\0\v\b\q\3\3\4\6\2\6\s\f\n\e\5\l\b\n\x\i\d\9\4\v\k\3\0\e\6\g\o\y\j\g\g\8\n\f\z\3\7\4\w\5\a\s\7\d\w\3\u\8\n\x\b\3\d\j\s\e\v\a\u\4\y\u\1\h\x\b\p\i\y\1\1\x\m\r\m\i\d\a\j\t\q\q\i\l\6\v\g\t\m\z\r\f\z\y\5\j\q\p\3\1\i\q\d\b\o\e\i\9\9\6\5\4\m\i\n\r\e\z\7\9\q\6\5\9\2\w\k\t\w\h\8\6\f\t\i\m\9\8\1\h\i\o\u\q\x\9\g\k\g\g\1\8\f\u\q\5\y\u\i\u\3\8\t\7\s\y\6\6\a\4\1\z\k\j\j\w\n\e\x\e\z\n\o\u\q\d\0\y\3\n\d\z\r\j\8\r\j\i\f\o\p\x\z\f\g\8\v\f\h\l\l\k\b\l\g\t\i\e\n\5\e\c\v\9\o\q\n\6\o\k\6\u\2\r\f\z\l\t\0\7\6\0\b\u\l\7\r\s\o ]] 00:06:53.930 00:06:53.930 real 0m4.497s 00:06:53.930 user 0m2.482s 00:06:53.930 sys 0m2.260s 00:06:53.930 15:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.930 ************************************ 00:06:53.930 END TEST dd_flags_misc 00:06:53.930 15:54:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:53.930 ************************************ 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:54.188 * Second test run, disabling liburing, forcing AIO 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:54.188 ************************************ 00:06:54.188 START TEST dd_flag_append_forced_aio 00:06:54.188 ************************************ 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=kwb6y3hedrvxcemooygx9dt3bm0tuzym 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=pkng7ai75l4ya5nm97j5ka06utin4v1d 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s kwb6y3hedrvxcemooygx9dt3bm0tuzym 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s pkng7ai75l4ya5nm97j5ka06utin4v1d 00:06:54.188 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:54.188 [2024-11-20 15:54:52.272444] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:54.188 [2024-11-20 15:54:52.272544] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60639 ] 00:06:54.188 [2024-11-20 15:54:52.415060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.445 [2024-11-20 15:54:52.480601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.445 [2024-11-20 15:54:52.534618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.445  [2024-11-20T15:54:52.953Z] Copying: 32/32 [B] (average 31 kBps) 00:06:54.703 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ pkng7ai75l4ya5nm97j5ka06utin4v1dkwb6y3hedrvxcemooygx9dt3bm0tuzym == \p\k\n\g\7\a\i\7\5\l\4\y\a\5\n\m\9\7\j\5\k\a\0\6\u\t\i\n\4\v\1\d\k\w\b\6\y\3\h\e\d\r\v\x\c\e\m\o\o\y\g\x\9\d\t\3\b\m\0\t\u\z\y\m ]] 00:06:54.703 00:06:54.703 real 0m0.559s 00:06:54.703 user 0m0.310s 00:06:54.703 sys 0m0.130s 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:54.703 ************************************ 00:06:54.703 END TEST dd_flag_append_forced_aio 00:06:54.703 ************************************ 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:54.703 ************************************ 00:06:54.703 START TEST dd_flag_directory_forced_aio 00:06:54.703 ************************************ 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.703 15:54:52 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.703 15:54:52 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:54.703 [2024-11-20 15:54:52.889377] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:54.703 [2024-11-20 15:54:52.889492] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60671 ] 00:06:54.961 [2024-11-20 15:54:53.036451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.961 [2024-11-20 15:54:53.102281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.961 [2024-11-20 15:54:53.159140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.961 [2024-11-20 15:54:53.200241] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:54.961 [2024-11-20 15:54:53.200542] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:54.961 [2024-11-20 15:54:53.200569] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.220 [2024-11-20 15:54:53.326015] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.220 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:55.477 [2024-11-20 15:54:53.468483] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:55.477 [2024-11-20 15:54:53.468666] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60675 ] 00:06:55.477 [2024-11-20 15:54:53.616039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.477 [2024-11-20 15:54:53.681646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.735 [2024-11-20 15:54:53.737395] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.735 [2024-11-20 15:54:53.777754] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.735 [2024-11-20 15:54:53.777839] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:55.735 [2024-11-20 15:54:53.777862] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.735 [2024-11-20 15:54:53.901190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:55.735 15:54:53 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.735 00:06:55.735 real 0m1.139s 00:06:55.735 user 0m0.630s 00:06:55.735 sys 0m0.294s 00:06:55.735 ************************************ 00:06:55.735 END TEST dd_flag_directory_forced_aio 00:06:55.735 ************************************ 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.735 15:54:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:55.993 ************************************ 00:06:55.993 START TEST dd_flag_nofollow_forced_aio 00:06:55.993 ************************************ 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:55.993 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:55.994 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.994 [2024-11-20 15:54:54.080007] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:55.994 [2024-11-20 15:54:54.080123] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:06:55.994 [2024-11-20 15:54:54.231983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.251 [2024-11-20 15:54:54.308573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.251 [2024-11-20 15:54:54.368141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.251 [2024-11-20 15:54:54.411963] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:56.251 [2024-11-20 15:54:54.412038] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:56.251 [2024-11-20 15:54:54.412068] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.509 [2024-11-20 15:54:54.541868] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:56.509 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:56.509 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.509 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:56.509 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:56.510 15:54:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:56.510 [2024-11-20 15:54:54.676328] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:56.510 [2024-11-20 15:54:54.676447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60718 ] 00:06:56.767 [2024-11-20 15:54:54.833261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.767 [2024-11-20 15:54:54.910142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.767 [2024-11-20 15:54:54.971860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.025 [2024-11-20 15:54:55.017788] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:57.025 [2024-11-20 15:54:55.017871] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:57.025 [2024-11-20 15:54:55.017897] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:57.025 [2024-11-20 15:54:55.153683] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.025 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.282 [2024-11-20 15:54:55.304192] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:57.282 [2024-11-20 15:54:55.304600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60726 ] 00:06:57.282 [2024-11-20 15:54:55.459447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.540 [2024-11-20 15:54:55.531464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.540 [2024-11-20 15:54:55.590546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:57.540  [2024-11-20T15:54:56.046Z] Copying: 512/512 [B] (average 500 kBps) 00:06:57.796 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ lzhmvljp7k9s68vssms4z026f1r10ss3w3z45o4gl4o65360wpbiqk60wwn73r9qsus3h1tzknrs51cwgr39r89r8v9vq06t75tam9g2pzyd2wevvyrwo1nixbjgegp5coz1w7akgi9nzkzopnxty9uthf6a9j20p7yityzblwtyclhssgv3a60df69mzl8pjsa9kozxdqe4z0jxskwdbht5frozx503o0e9q2b2liihdsxdxfj0f65d6u6iy2ehnntpeegosuure8cwtui99nsaov487mdtui40dzd9ev0ypr3c03o63tktcc69hsidao1xhl877o6z1oaok6hpp2djdquuh71v0pf01hknxknj655rgoi1xdzfjd35b6qkr4it4o419ziu8idyjcg5o90huvsa9hhbpw0pc32qa4ks6ue5c8j2s4xwp2783a3yi7njbnrho44te0yk14p962ltn6k4rwxtvore3bjrwauprcfowzq563hikphvgkr8 == \l\z\h\m\v\l\j\p\7\k\9\s\6\8\v\s\s\m\s\4\z\0\2\6\f\1\r\1\0\s\s\3\w\3\z\4\5\o\4\g\l\4\o\6\5\3\6\0\w\p\b\i\q\k\6\0\w\w\n\7\3\r\9\q\s\u\s\3\h\1\t\z\k\n\r\s\5\1\c\w\g\r\3\9\r\8\9\r\8\v\9\v\q\0\6\t\7\5\t\a\m\9\g\2\p\z\y\d\2\w\e\v\v\y\r\w\o\1\n\i\x\b\j\g\e\g\p\5\c\o\z\1\w\7\a\k\g\i\9\n\z\k\z\o\p\n\x\t\y\9\u\t\h\f\6\a\9\j\2\0\p\7\y\i\t\y\z\b\l\w\t\y\c\l\h\s\s\g\v\3\a\6\0\d\f\6\9\m\z\l\8\p\j\s\a\9\k\o\z\x\d\q\e\4\z\0\j\x\s\k\w\d\b\h\t\5\f\r\o\z\x\5\0\3\o\0\e\9\q\2\b\2\l\i\i\h\d\s\x\d\x\f\j\0\f\6\5\d\6\u\6\i\y\2\e\h\n\n\t\p\e\e\g\o\s\u\u\r\e\8\c\w\t\u\i\9\9\n\s\a\o\v\4\8\7\m\d\t\u\i\4\0\d\z\d\9\e\v\0\y\p\r\3\c\0\3\o\6\3\t\k\t\c\c\6\9\h\s\i\d\a\o\1\x\h\l\8\7\7\o\6\z\1\o\a\o\k\6\h\p\p\2\d\j\d\q\u\u\h\7\1\v\0\p\f\0\1\h\k\n\x\k\n\j\6\5\5\r\g\o\i\1\x\d\z\f\j\d\3\5\b\6\q\k\r\4\i\t\4\o\4\1\9\z\i\u\8\i\d\y\j\c\g\5\o\9\0\h\u\v\s\a\9\h\h\b\p\w\0\p\c\3\2\q\a\4\k\s\6\u\e\5\c\8\j\2\s\4\x\w\p\2\7\8\3\a\3\y\i\7\n\j\b\n\r\h\o\4\4\t\e\0\y\k\1\4\p\9\6\2\l\t\n\6\k\4\r\w\x\t\v\o\r\e\3\b\j\r\w\a\u\p\r\c\f\o\w\z\q\5\6\3\h\i\k\p\h\v\g\k\r\8 ]] 00:06:57.796 00:06:57.796 real 0m1.844s 00:06:57.796 user 0m1.048s 00:06:57.796 sys 0m0.459s 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.796 ************************************ 00:06:57.796 END TEST dd_flag_nofollow_forced_aio 00:06:57.796 ************************************ 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:57.796 ************************************ 00:06:57.796 START TEST dd_flag_noatime_forced_aio 00:06:57.796 ************************************ 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732118095 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732118095 00:06:57.796 15:54:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:58.736 15:54:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:58.736 [2024-11-20 15:54:56.982306] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:06:58.736 [2024-11-20 15:54:56.982413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ] 00:06:58.994 [2024-11-20 15:54:57.133022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.994 [2024-11-20 15:54:57.204019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.251 [2024-11-20 15:54:57.262119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.252  [2024-11-20T15:54:57.760Z] Copying: 512/512 [B] (average 500 kBps) 00:06:59.510 00:06:59.510 15:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:59.510 15:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732118095 )) 00:06:59.510 15:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.510 15:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732118095 )) 00:06:59.510 15:54:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:59.510 [2024-11-20 15:54:57.570394] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:06:59.510 [2024-11-20 15:54:57.570503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60780 ] 00:06:59.510 [2024-11-20 15:54:57.720626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.768 [2024-11-20 15:54:57.789773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.768 [2024-11-20 15:54:57.846324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.768  [2024-11-20T15:54:58.275Z] Copying: 512/512 [B] (average 500 kBps) 00:07:00.025 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732118097 )) 00:07:00.025 00:07:00.025 real 0m2.192s 00:07:00.025 user 0m0.661s 00:07:00.025 sys 0m0.294s 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.025 ************************************ 00:07:00.025 END TEST dd_flag_noatime_forced_aio 00:07:00.025 ************************************ 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.025 ************************************ 00:07:00.025 START TEST dd_flags_misc_forced_aio 00:07:00.025 ************************************ 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.025 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:00.025 [2024-11-20 15:54:58.210015] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:00.025 [2024-11-20 15:54:58.210121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60812 ] 00:07:00.284 [2024-11-20 15:54:58.364784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.284 [2024-11-20 15:54:58.431515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.284 [2024-11-20 15:54:58.488095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.284  [2024-11-20T15:54:58.793Z] Copying: 512/512 [B] (average 500 kBps) 00:07:00.543 00:07:00.543 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0ufh1wjncpitijw47u1ctb7jh82wmjqb2t51d8pb357x2om75kn4khd171q8xe60ez3wknrf6pxv5zbx02dxvqelu84lg2yxg2naba2ozqjzgm1iotxk5s7d574ygbtidi41hpiaijo7f8ixihtnco4nnb9greb7ug8uibgxdzyx17z41hg50pqfrtpun2h3iao40mjgjcwgvi5irqvo9lwuoguqcga4ub6vg3uqg7uyzvxogaxefatgq6hgwoq04tew54wv2wokovpvj0ciue8o24dy6keb4vlmq05pnpu5y36el53x1t12lji0c9g39ehof60mnf1d0ccyypb5wws316szdtcihz85s4k4wij2pdv08jvt0mrmutzfsc9rid17v4f5z9kolic07vj0jfvc59gq8xn6kqlduliu4aizkh4cb6t8ynjhkcu2uodcacidf1jqs7n7k3bb79t2lumiv1muo1c9xrt3aa1r0mm5iqpr078i7bhmb8vc8x51 == 
\0\u\f\h\1\w\j\n\c\p\i\t\i\j\w\4\7\u\1\c\t\b\7\j\h\8\2\w\m\j\q\b\2\t\5\1\d\8\p\b\3\5\7\x\2\o\m\7\5\k\n\4\k\h\d\1\7\1\q\8\x\e\6\0\e\z\3\w\k\n\r\f\6\p\x\v\5\z\b\x\0\2\d\x\v\q\e\l\u\8\4\l\g\2\y\x\g\2\n\a\b\a\2\o\z\q\j\z\g\m\1\i\o\t\x\k\5\s\7\d\5\7\4\y\g\b\t\i\d\i\4\1\h\p\i\a\i\j\o\7\f\8\i\x\i\h\t\n\c\o\4\n\n\b\9\g\r\e\b\7\u\g\8\u\i\b\g\x\d\z\y\x\1\7\z\4\1\h\g\5\0\p\q\f\r\t\p\u\n\2\h\3\i\a\o\4\0\m\j\g\j\c\w\g\v\i\5\i\r\q\v\o\9\l\w\u\o\g\u\q\c\g\a\4\u\b\6\v\g\3\u\q\g\7\u\y\z\v\x\o\g\a\x\e\f\a\t\g\q\6\h\g\w\o\q\0\4\t\e\w\5\4\w\v\2\w\o\k\o\v\p\v\j\0\c\i\u\e\8\o\2\4\d\y\6\k\e\b\4\v\l\m\q\0\5\p\n\p\u\5\y\3\6\e\l\5\3\x\1\t\1\2\l\j\i\0\c\9\g\3\9\e\h\o\f\6\0\m\n\f\1\d\0\c\c\y\y\p\b\5\w\w\s\3\1\6\s\z\d\t\c\i\h\z\8\5\s\4\k\4\w\i\j\2\p\d\v\0\8\j\v\t\0\m\r\m\u\t\z\f\s\c\9\r\i\d\1\7\v\4\f\5\z\9\k\o\l\i\c\0\7\v\j\0\j\f\v\c\5\9\g\q\8\x\n\6\k\q\l\d\u\l\i\u\4\a\i\z\k\h\4\c\b\6\t\8\y\n\j\h\k\c\u\2\u\o\d\c\a\c\i\d\f\1\j\q\s\7\n\7\k\3\b\b\7\9\t\2\l\u\m\i\v\1\m\u\o\1\c\9\x\r\t\3\a\a\1\r\0\m\m\5\i\q\p\r\0\7\8\i\7\b\h\m\b\8\v\c\8\x\5\1 ]] 00:07:00.543 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.543 15:54:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:00.543 [2024-11-20 15:54:58.783666] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:00.543 [2024-11-20 15:54:58.783775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60818 ] 00:07:00.800 [2024-11-20 15:54:58.937347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.800 [2024-11-20 15:54:59.008208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.058 [2024-11-20 15:54:59.065036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.058  [2024-11-20T15:54:59.564Z] Copying: 512/512 [B] (average 500 kBps) 00:07:01.314 00:07:01.314 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0ufh1wjncpitijw47u1ctb7jh82wmjqb2t51d8pb357x2om75kn4khd171q8xe60ez3wknrf6pxv5zbx02dxvqelu84lg2yxg2naba2ozqjzgm1iotxk5s7d574ygbtidi41hpiaijo7f8ixihtnco4nnb9greb7ug8uibgxdzyx17z41hg50pqfrtpun2h3iao40mjgjcwgvi5irqvo9lwuoguqcga4ub6vg3uqg7uyzvxogaxefatgq6hgwoq04tew54wv2wokovpvj0ciue8o24dy6keb4vlmq05pnpu5y36el53x1t12lji0c9g39ehof60mnf1d0ccyypb5wws316szdtcihz85s4k4wij2pdv08jvt0mrmutzfsc9rid17v4f5z9kolic07vj0jfvc59gq8xn6kqlduliu4aizkh4cb6t8ynjhkcu2uodcacidf1jqs7n7k3bb79t2lumiv1muo1c9xrt3aa1r0mm5iqpr078i7bhmb8vc8x51 == 
\0\u\f\h\1\w\j\n\c\p\i\t\i\j\w\4\7\u\1\c\t\b\7\j\h\8\2\w\m\j\q\b\2\t\5\1\d\8\p\b\3\5\7\x\2\o\m\7\5\k\n\4\k\h\d\1\7\1\q\8\x\e\6\0\e\z\3\w\k\n\r\f\6\p\x\v\5\z\b\x\0\2\d\x\v\q\e\l\u\8\4\l\g\2\y\x\g\2\n\a\b\a\2\o\z\q\j\z\g\m\1\i\o\t\x\k\5\s\7\d\5\7\4\y\g\b\t\i\d\i\4\1\h\p\i\a\i\j\o\7\f\8\i\x\i\h\t\n\c\o\4\n\n\b\9\g\r\e\b\7\u\g\8\u\i\b\g\x\d\z\y\x\1\7\z\4\1\h\g\5\0\p\q\f\r\t\p\u\n\2\h\3\i\a\o\4\0\m\j\g\j\c\w\g\v\i\5\i\r\q\v\o\9\l\w\u\o\g\u\q\c\g\a\4\u\b\6\v\g\3\u\q\g\7\u\y\z\v\x\o\g\a\x\e\f\a\t\g\q\6\h\g\w\o\q\0\4\t\e\w\5\4\w\v\2\w\o\k\o\v\p\v\j\0\c\i\u\e\8\o\2\4\d\y\6\k\e\b\4\v\l\m\q\0\5\p\n\p\u\5\y\3\6\e\l\5\3\x\1\t\1\2\l\j\i\0\c\9\g\3\9\e\h\o\f\6\0\m\n\f\1\d\0\c\c\y\y\p\b\5\w\w\s\3\1\6\s\z\d\t\c\i\h\z\8\5\s\4\k\4\w\i\j\2\p\d\v\0\8\j\v\t\0\m\r\m\u\t\z\f\s\c\9\r\i\d\1\7\v\4\f\5\z\9\k\o\l\i\c\0\7\v\j\0\j\f\v\c\5\9\g\q\8\x\n\6\k\q\l\d\u\l\i\u\4\a\i\z\k\h\4\c\b\6\t\8\y\n\j\h\k\c\u\2\u\o\d\c\a\c\i\d\f\1\j\q\s\7\n\7\k\3\b\b\7\9\t\2\l\u\m\i\v\1\m\u\o\1\c\9\x\r\t\3\a\a\1\r\0\m\m\5\i\q\p\r\0\7\8\i\7\b\h\m\b\8\v\c\8\x\5\1 ]] 00:07:01.314 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.314 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:01.314 [2024-11-20 15:54:59.373157] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:01.314 [2024-11-20 15:54:59.373298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:07:01.314 [2024-11-20 15:54:59.527764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.572 [2024-11-20 15:54:59.604484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.572 [2024-11-20 15:54:59.665747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.572  [2024-11-20T15:55:00.080Z] Copying: 512/512 [B] (average 500 kBps) 00:07:01.830 00:07:01.830 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0ufh1wjncpitijw47u1ctb7jh82wmjqb2t51d8pb357x2om75kn4khd171q8xe60ez3wknrf6pxv5zbx02dxvqelu84lg2yxg2naba2ozqjzgm1iotxk5s7d574ygbtidi41hpiaijo7f8ixihtnco4nnb9greb7ug8uibgxdzyx17z41hg50pqfrtpun2h3iao40mjgjcwgvi5irqvo9lwuoguqcga4ub6vg3uqg7uyzvxogaxefatgq6hgwoq04tew54wv2wokovpvj0ciue8o24dy6keb4vlmq05pnpu5y36el53x1t12lji0c9g39ehof60mnf1d0ccyypb5wws316szdtcihz85s4k4wij2pdv08jvt0mrmutzfsc9rid17v4f5z9kolic07vj0jfvc59gq8xn6kqlduliu4aizkh4cb6t8ynjhkcu2uodcacidf1jqs7n7k3bb79t2lumiv1muo1c9xrt3aa1r0mm5iqpr078i7bhmb8vc8x51 == 
\0\u\f\h\1\w\j\n\c\p\i\t\i\j\w\4\7\u\1\c\t\b\7\j\h\8\2\w\m\j\q\b\2\t\5\1\d\8\p\b\3\5\7\x\2\o\m\7\5\k\n\4\k\h\d\1\7\1\q\8\x\e\6\0\e\z\3\w\k\n\r\f\6\p\x\v\5\z\b\x\0\2\d\x\v\q\e\l\u\8\4\l\g\2\y\x\g\2\n\a\b\a\2\o\z\q\j\z\g\m\1\i\o\t\x\k\5\s\7\d\5\7\4\y\g\b\t\i\d\i\4\1\h\p\i\a\i\j\o\7\f\8\i\x\i\h\t\n\c\o\4\n\n\b\9\g\r\e\b\7\u\g\8\u\i\b\g\x\d\z\y\x\1\7\z\4\1\h\g\5\0\p\q\f\r\t\p\u\n\2\h\3\i\a\o\4\0\m\j\g\j\c\w\g\v\i\5\i\r\q\v\o\9\l\w\u\o\g\u\q\c\g\a\4\u\b\6\v\g\3\u\q\g\7\u\y\z\v\x\o\g\a\x\e\f\a\t\g\q\6\h\g\w\o\q\0\4\t\e\w\5\4\w\v\2\w\o\k\o\v\p\v\j\0\c\i\u\e\8\o\2\4\d\y\6\k\e\b\4\v\l\m\q\0\5\p\n\p\u\5\y\3\6\e\l\5\3\x\1\t\1\2\l\j\i\0\c\9\g\3\9\e\h\o\f\6\0\m\n\f\1\d\0\c\c\y\y\p\b\5\w\w\s\3\1\6\s\z\d\t\c\i\h\z\8\5\s\4\k\4\w\i\j\2\p\d\v\0\8\j\v\t\0\m\r\m\u\t\z\f\s\c\9\r\i\d\1\7\v\4\f\5\z\9\k\o\l\i\c\0\7\v\j\0\j\f\v\c\5\9\g\q\8\x\n\6\k\q\l\d\u\l\i\u\4\a\i\z\k\h\4\c\b\6\t\8\y\n\j\h\k\c\u\2\u\o\d\c\a\c\i\d\f\1\j\q\s\7\n\7\k\3\b\b\7\9\t\2\l\u\m\i\v\1\m\u\o\1\c\9\x\r\t\3\a\a\1\r\0\m\m\5\i\q\p\r\0\7\8\i\7\b\h\m\b\8\v\c\8\x\5\1 ]] 00:07:01.830 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.830 15:54:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:01.830 [2024-11-20 15:54:59.965447] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:01.830 [2024-11-20 15:54:59.965534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60834 ] 00:07:02.088 [2024-11-20 15:55:00.114186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.088 [2024-11-20 15:55:00.188908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.088 [2024-11-20 15:55:00.247060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.088  [2024-11-20T15:55:00.596Z] Copying: 512/512 [B] (average 500 kBps) 00:07:02.346 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0ufh1wjncpitijw47u1ctb7jh82wmjqb2t51d8pb357x2om75kn4khd171q8xe60ez3wknrf6pxv5zbx02dxvqelu84lg2yxg2naba2ozqjzgm1iotxk5s7d574ygbtidi41hpiaijo7f8ixihtnco4nnb9greb7ug8uibgxdzyx17z41hg50pqfrtpun2h3iao40mjgjcwgvi5irqvo9lwuoguqcga4ub6vg3uqg7uyzvxogaxefatgq6hgwoq04tew54wv2wokovpvj0ciue8o24dy6keb4vlmq05pnpu5y36el53x1t12lji0c9g39ehof60mnf1d0ccyypb5wws316szdtcihz85s4k4wij2pdv08jvt0mrmutzfsc9rid17v4f5z9kolic07vj0jfvc59gq8xn6kqlduliu4aizkh4cb6t8ynjhkcu2uodcacidf1jqs7n7k3bb79t2lumiv1muo1c9xrt3aa1r0mm5iqpr078i7bhmb8vc8x51 == 
\0\u\f\h\1\w\j\n\c\p\i\t\i\j\w\4\7\u\1\c\t\b\7\j\h\8\2\w\m\j\q\b\2\t\5\1\d\8\p\b\3\5\7\x\2\o\m\7\5\k\n\4\k\h\d\1\7\1\q\8\x\e\6\0\e\z\3\w\k\n\r\f\6\p\x\v\5\z\b\x\0\2\d\x\v\q\e\l\u\8\4\l\g\2\y\x\g\2\n\a\b\a\2\o\z\q\j\z\g\m\1\i\o\t\x\k\5\s\7\d\5\7\4\y\g\b\t\i\d\i\4\1\h\p\i\a\i\j\o\7\f\8\i\x\i\h\t\n\c\o\4\n\n\b\9\g\r\e\b\7\u\g\8\u\i\b\g\x\d\z\y\x\1\7\z\4\1\h\g\5\0\p\q\f\r\t\p\u\n\2\h\3\i\a\o\4\0\m\j\g\j\c\w\g\v\i\5\i\r\q\v\o\9\l\w\u\o\g\u\q\c\g\a\4\u\b\6\v\g\3\u\q\g\7\u\y\z\v\x\o\g\a\x\e\f\a\t\g\q\6\h\g\w\o\q\0\4\t\e\w\5\4\w\v\2\w\o\k\o\v\p\v\j\0\c\i\u\e\8\o\2\4\d\y\6\k\e\b\4\v\l\m\q\0\5\p\n\p\u\5\y\3\6\e\l\5\3\x\1\t\1\2\l\j\i\0\c\9\g\3\9\e\h\o\f\6\0\m\n\f\1\d\0\c\c\y\y\p\b\5\w\w\s\3\1\6\s\z\d\t\c\i\h\z\8\5\s\4\k\4\w\i\j\2\p\d\v\0\8\j\v\t\0\m\r\m\u\t\z\f\s\c\9\r\i\d\1\7\v\4\f\5\z\9\k\o\l\i\c\0\7\v\j\0\j\f\v\c\5\9\g\q\8\x\n\6\k\q\l\d\u\l\i\u\4\a\i\z\k\h\4\c\b\6\t\8\y\n\j\h\k\c\u\2\u\o\d\c\a\c\i\d\f\1\j\q\s\7\n\7\k\3\b\b\7\9\t\2\l\u\m\i\v\1\m\u\o\1\c\9\x\r\t\3\a\a\1\r\0\m\m\5\i\q\p\r\0\7\8\i\7\b\h\m\b\8\v\c\8\x\5\1 ]] 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.347 15:55:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:02.347 [2024-11-20 15:55:00.583139] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
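(The four runs above and the four that follow belong to the dd_flags_misc_forced_aio matrix: every read flag in (direct, nonblock) is paired with every write flag in (direct, nonblock, sync, dsync), 512 random bytes are round-tripped through spdk_dd, and the copy is asserted to be bit-identical. A minimal stand-alone sketch of the same loop follows; it assumes the SPDK build path shown in the trace, and the DUMP0/DUMP1 names, head -c in place of the suite's gen_bytes helper, and cmp in place of the escaped-string check are illustrative substitutions, not the suite's own code.)

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > "$DUMP0"          # stand-in for the suite's gen_bytes 512
    for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --aio --if="$DUMP0" --iflag="$flag_ro" \
                   --of="$DUMP1" --oflag="$flag_rw"
        cmp "$DUMP0" "$DUMP1"                    # the test expects an identical copy
    done
done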
00:07:02.347 [2024-11-20 15:55:00.583436] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:07:02.604 [2024-11-20 15:55:00.733754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.605 [2024-11-20 15:55:00.796949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.863 [2024-11-20 15:55:00.861914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:02.863  [2024-11-20T15:55:01.372Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.122 00:07:03.122 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ az6anm2k2r5bs8v2o12iq2b88h7n5t63wkqqve9bafh8f30e6nyg6ddwyrgkn0bh9eeldxuztwu4q7izucl8ol2allj9jsxy5hg1fu2zy9tn5pb31qyeb9w2xpvv16vh72gk1rjda81qjz227gnqwsaev1t1z39dqveyggx0hsjvbqcrqxole8cbpzioatgp1yahpzxpbivp1o2ui8moh0q9sm5ctw5dwvzzn29yiqoqhypxhn2ap8hzhkmz1lqzge04znkst6lhizciqt9jc9s0dypasl3lkivvuwlyxqvb5cag68gezuh854xxsmucur8yz1tvkc8ntd70g21ip17yhu0vpg7jwqd4yyj17awxfy1wth2qf6k9rgqdmzlphnq8kr80edsmuwa98cexemp0zv4hl32gj0wxgz9ncoq2mpu109odzp8xpxy9jdzh0pu6cempikgiegfqkhtsq5ocsphy3fygdrvhpnacjzy76utzf19trny9gvxin63a == \a\z\6\a\n\m\2\k\2\r\5\b\s\8\v\2\o\1\2\i\q\2\b\8\8\h\7\n\5\t\6\3\w\k\q\q\v\e\9\b\a\f\h\8\f\3\0\e\6\n\y\g\6\d\d\w\y\r\g\k\n\0\b\h\9\e\e\l\d\x\u\z\t\w\u\4\q\7\i\z\u\c\l\8\o\l\2\a\l\l\j\9\j\s\x\y\5\h\g\1\f\u\2\z\y\9\t\n\5\p\b\3\1\q\y\e\b\9\w\2\x\p\v\v\1\6\v\h\7\2\g\k\1\r\j\d\a\8\1\q\j\z\2\2\7\g\n\q\w\s\a\e\v\1\t\1\z\3\9\d\q\v\e\y\g\g\x\0\h\s\j\v\b\q\c\r\q\x\o\l\e\8\c\b\p\z\i\o\a\t\g\p\1\y\a\h\p\z\x\p\b\i\v\p\1\o\2\u\i\8\m\o\h\0\q\9\s\m\5\c\t\w\5\d\w\v\z\z\n\2\9\y\i\q\o\q\h\y\p\x\h\n\2\a\p\8\h\z\h\k\m\z\1\l\q\z\g\e\0\4\z\n\k\s\t\6\l\h\i\z\c\i\q\t\9\j\c\9\s\0\d\y\p\a\s\l\3\l\k\i\v\v\u\w\l\y\x\q\v\b\5\c\a\g\6\8\g\e\z\u\h\8\5\4\x\x\s\m\u\c\u\r\8\y\z\1\t\v\k\c\8\n\t\d\7\0\g\2\1\i\p\1\7\y\h\u\0\v\p\g\7\j\w\q\d\4\y\y\j\1\7\a\w\x\f\y\1\w\t\h\2\q\f\6\k\9\r\g\q\d\m\z\l\p\h\n\q\8\k\r\8\0\e\d\s\m\u\w\a\9\8\c\e\x\e\m\p\0\z\v\4\h\l\3\2\g\j\0\w\x\g\z\9\n\c\o\q\2\m\p\u\1\0\9\o\d\z\p\8\x\p\x\y\9\j\d\z\h\0\p\u\6\c\e\m\p\i\k\g\i\e\g\f\q\k\h\t\s\q\5\o\c\s\p\h\y\3\f\y\g\d\r\v\h\p\n\a\c\j\z\y\7\6\u\t\z\f\1\9\t\r\n\y\9\g\v\x\i\n\6\3\a ]] 00:07:03.122 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.122 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:03.122 [2024-11-20 15:55:01.191495] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:03.122 [2024-11-20 15:55:01.191611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60855 ] 00:07:03.122 [2024-11-20 15:55:01.342560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.380 [2024-11-20 15:55:01.416073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.380 [2024-11-20 15:55:01.477671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.380  [2024-11-20T15:55:01.888Z] Copying: 512/512 [B] (average 500 kBps) 00:07:03.638 00:07:03.638 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ az6anm2k2r5bs8v2o12iq2b88h7n5t63wkqqve9bafh8f30e6nyg6ddwyrgkn0bh9eeldxuztwu4q7izucl8ol2allj9jsxy5hg1fu2zy9tn5pb31qyeb9w2xpvv16vh72gk1rjda81qjz227gnqwsaev1t1z39dqveyggx0hsjvbqcrqxole8cbpzioatgp1yahpzxpbivp1o2ui8moh0q9sm5ctw5dwvzzn29yiqoqhypxhn2ap8hzhkmz1lqzge04znkst6lhizciqt9jc9s0dypasl3lkivvuwlyxqvb5cag68gezuh854xxsmucur8yz1tvkc8ntd70g21ip17yhu0vpg7jwqd4yyj17awxfy1wth2qf6k9rgqdmzlphnq8kr80edsmuwa98cexemp0zv4hl32gj0wxgz9ncoq2mpu109odzp8xpxy9jdzh0pu6cempikgiegfqkhtsq5ocsphy3fygdrvhpnacjzy76utzf19trny9gvxin63a == \a\z\6\a\n\m\2\k\2\r\5\b\s\8\v\2\o\1\2\i\q\2\b\8\8\h\7\n\5\t\6\3\w\k\q\q\v\e\9\b\a\f\h\8\f\3\0\e\6\n\y\g\6\d\d\w\y\r\g\k\n\0\b\h\9\e\e\l\d\x\u\z\t\w\u\4\q\7\i\z\u\c\l\8\o\l\2\a\l\l\j\9\j\s\x\y\5\h\g\1\f\u\2\z\y\9\t\n\5\p\b\3\1\q\y\e\b\9\w\2\x\p\v\v\1\6\v\h\7\2\g\k\1\r\j\d\a\8\1\q\j\z\2\2\7\g\n\q\w\s\a\e\v\1\t\1\z\3\9\d\q\v\e\y\g\g\x\0\h\s\j\v\b\q\c\r\q\x\o\l\e\8\c\b\p\z\i\o\a\t\g\p\1\y\a\h\p\z\x\p\b\i\v\p\1\o\2\u\i\8\m\o\h\0\q\9\s\m\5\c\t\w\5\d\w\v\z\z\n\2\9\y\i\q\o\q\h\y\p\x\h\n\2\a\p\8\h\z\h\k\m\z\1\l\q\z\g\e\0\4\z\n\k\s\t\6\l\h\i\z\c\i\q\t\9\j\c\9\s\0\d\y\p\a\s\l\3\l\k\i\v\v\u\w\l\y\x\q\v\b\5\c\a\g\6\8\g\e\z\u\h\8\5\4\x\x\s\m\u\c\u\r\8\y\z\1\t\v\k\c\8\n\t\d\7\0\g\2\1\i\p\1\7\y\h\u\0\v\p\g\7\j\w\q\d\4\y\y\j\1\7\a\w\x\f\y\1\w\t\h\2\q\f\6\k\9\r\g\q\d\m\z\l\p\h\n\q\8\k\r\8\0\e\d\s\m\u\w\a\9\8\c\e\x\e\m\p\0\z\v\4\h\l\3\2\g\j\0\w\x\g\z\9\n\c\o\q\2\m\p\u\1\0\9\o\d\z\p\8\x\p\x\y\9\j\d\z\h\0\p\u\6\c\e\m\p\i\k\g\i\e\g\f\q\k\h\t\s\q\5\o\c\s\p\h\y\3\f\y\g\d\r\v\h\p\n\a\c\j\z\y\7\6\u\t\z\f\1\9\t\r\n\y\9\g\v\x\i\n\6\3\a ]] 00:07:03.638 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:03.638 15:55:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:03.638 [2024-11-20 15:55:01.784470] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:03.638 [2024-11-20 15:55:01.784582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60857 ] 00:07:03.896 [2024-11-20 15:55:01.934483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.896 [2024-11-20 15:55:02.002365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.896 [2024-11-20 15:55:02.058726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.896  [2024-11-20T15:55:02.419Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.169 00:07:04.169 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ az6anm2k2r5bs8v2o12iq2b88h7n5t63wkqqve9bafh8f30e6nyg6ddwyrgkn0bh9eeldxuztwu4q7izucl8ol2allj9jsxy5hg1fu2zy9tn5pb31qyeb9w2xpvv16vh72gk1rjda81qjz227gnqwsaev1t1z39dqveyggx0hsjvbqcrqxole8cbpzioatgp1yahpzxpbivp1o2ui8moh0q9sm5ctw5dwvzzn29yiqoqhypxhn2ap8hzhkmz1lqzge04znkst6lhizciqt9jc9s0dypasl3lkivvuwlyxqvb5cag68gezuh854xxsmucur8yz1tvkc8ntd70g21ip17yhu0vpg7jwqd4yyj17awxfy1wth2qf6k9rgqdmzlphnq8kr80edsmuwa98cexemp0zv4hl32gj0wxgz9ncoq2mpu109odzp8xpxy9jdzh0pu6cempikgiegfqkhtsq5ocsphy3fygdrvhpnacjzy76utzf19trny9gvxin63a == \a\z\6\a\n\m\2\k\2\r\5\b\s\8\v\2\o\1\2\i\q\2\b\8\8\h\7\n\5\t\6\3\w\k\q\q\v\e\9\b\a\f\h\8\f\3\0\e\6\n\y\g\6\d\d\w\y\r\g\k\n\0\b\h\9\e\e\l\d\x\u\z\t\w\u\4\q\7\i\z\u\c\l\8\o\l\2\a\l\l\j\9\j\s\x\y\5\h\g\1\f\u\2\z\y\9\t\n\5\p\b\3\1\q\y\e\b\9\w\2\x\p\v\v\1\6\v\h\7\2\g\k\1\r\j\d\a\8\1\q\j\z\2\2\7\g\n\q\w\s\a\e\v\1\t\1\z\3\9\d\q\v\e\y\g\g\x\0\h\s\j\v\b\q\c\r\q\x\o\l\e\8\c\b\p\z\i\o\a\t\g\p\1\y\a\h\p\z\x\p\b\i\v\p\1\o\2\u\i\8\m\o\h\0\q\9\s\m\5\c\t\w\5\d\w\v\z\z\n\2\9\y\i\q\o\q\h\y\p\x\h\n\2\a\p\8\h\z\h\k\m\z\1\l\q\z\g\e\0\4\z\n\k\s\t\6\l\h\i\z\c\i\q\t\9\j\c\9\s\0\d\y\p\a\s\l\3\l\k\i\v\v\u\w\l\y\x\q\v\b\5\c\a\g\6\8\g\e\z\u\h\8\5\4\x\x\s\m\u\c\u\r\8\y\z\1\t\v\k\c\8\n\t\d\7\0\g\2\1\i\p\1\7\y\h\u\0\v\p\g\7\j\w\q\d\4\y\y\j\1\7\a\w\x\f\y\1\w\t\h\2\q\f\6\k\9\r\g\q\d\m\z\l\p\h\n\q\8\k\r\8\0\e\d\s\m\u\w\a\9\8\c\e\x\e\m\p\0\z\v\4\h\l\3\2\g\j\0\w\x\g\z\9\n\c\o\q\2\m\p\u\1\0\9\o\d\z\p\8\x\p\x\y\9\j\d\z\h\0\p\u\6\c\e\m\p\i\k\g\i\e\g\f\q\k\h\t\s\q\5\o\c\s\p\h\y\3\f\y\g\d\r\v\h\p\n\a\c\j\z\y\7\6\u\t\z\f\1\9\t\r\n\y\9\g\v\x\i\n\6\3\a ]] 00:07:04.169 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:04.169 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:04.169 [2024-11-20 15:55:02.360794] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:04.169 [2024-11-20 15:55:02.360912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:07:04.439 [2024-11-20 15:55:02.509900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.439 [2024-11-20 15:55:02.566365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.439 [2024-11-20 15:55:02.621149] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.439  [2024-11-20T15:55:02.947Z] Copying: 512/512 [B] (average 500 kBps) 00:07:04.697 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ az6anm2k2r5bs8v2o12iq2b88h7n5t63wkqqve9bafh8f30e6nyg6ddwyrgkn0bh9eeldxuztwu4q7izucl8ol2allj9jsxy5hg1fu2zy9tn5pb31qyeb9w2xpvv16vh72gk1rjda81qjz227gnqwsaev1t1z39dqveyggx0hsjvbqcrqxole8cbpzioatgp1yahpzxpbivp1o2ui8moh0q9sm5ctw5dwvzzn29yiqoqhypxhn2ap8hzhkmz1lqzge04znkst6lhizciqt9jc9s0dypasl3lkivvuwlyxqvb5cag68gezuh854xxsmucur8yz1tvkc8ntd70g21ip17yhu0vpg7jwqd4yyj17awxfy1wth2qf6k9rgqdmzlphnq8kr80edsmuwa98cexemp0zv4hl32gj0wxgz9ncoq2mpu109odzp8xpxy9jdzh0pu6cempikgiegfqkhtsq5ocsphy3fygdrvhpnacjzy76utzf19trny9gvxin63a == \a\z\6\a\n\m\2\k\2\r\5\b\s\8\v\2\o\1\2\i\q\2\b\8\8\h\7\n\5\t\6\3\w\k\q\q\v\e\9\b\a\f\h\8\f\3\0\e\6\n\y\g\6\d\d\w\y\r\g\k\n\0\b\h\9\e\e\l\d\x\u\z\t\w\u\4\q\7\i\z\u\c\l\8\o\l\2\a\l\l\j\9\j\s\x\y\5\h\g\1\f\u\2\z\y\9\t\n\5\p\b\3\1\q\y\e\b\9\w\2\x\p\v\v\1\6\v\h\7\2\g\k\1\r\j\d\a\8\1\q\j\z\2\2\7\g\n\q\w\s\a\e\v\1\t\1\z\3\9\d\q\v\e\y\g\g\x\0\h\s\j\v\b\q\c\r\q\x\o\l\e\8\c\b\p\z\i\o\a\t\g\p\1\y\a\h\p\z\x\p\b\i\v\p\1\o\2\u\i\8\m\o\h\0\q\9\s\m\5\c\t\w\5\d\w\v\z\z\n\2\9\y\i\q\o\q\h\y\p\x\h\n\2\a\p\8\h\z\h\k\m\z\1\l\q\z\g\e\0\4\z\n\k\s\t\6\l\h\i\z\c\i\q\t\9\j\c\9\s\0\d\y\p\a\s\l\3\l\k\i\v\v\u\w\l\y\x\q\v\b\5\c\a\g\6\8\g\e\z\u\h\8\5\4\x\x\s\m\u\c\u\r\8\y\z\1\t\v\k\c\8\n\t\d\7\0\g\2\1\i\p\1\7\y\h\u\0\v\p\g\7\j\w\q\d\4\y\y\j\1\7\a\w\x\f\y\1\w\t\h\2\q\f\6\k\9\r\g\q\d\m\z\l\p\h\n\q\8\k\r\8\0\e\d\s\m\u\w\a\9\8\c\e\x\e\m\p\0\z\v\4\h\l\3\2\g\j\0\w\x\g\z\9\n\c\o\q\2\m\p\u\1\0\9\o\d\z\p\8\x\p\x\y\9\j\d\z\h\0\p\u\6\c\e\m\p\i\k\g\i\e\g\f\q\k\h\t\s\q\5\o\c\s\p\h\y\3\f\y\g\d\r\v\h\p\n\a\c\j\z\y\7\6\u\t\z\f\1\9\t\r\n\y\9\g\v\x\i\n\6\3\a ]] 00:07:04.698 00:07:04.698 real 0m4.709s 00:07:04.698 user 0m2.569s 00:07:04.698 sys 0m1.166s 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.698 ************************************ 00:07:04.698 END TEST dd_flags_misc_forced_aio 00:07:04.698 ************************************ 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:04.698 ************************************ 00:07:04.698 END TEST spdk_dd_posix 00:07:04.698 ************************************ 00:07:04.698 00:07:04.698 real 0m21.313s 00:07:04.698 user 0m10.545s 00:07:04.698 sys 0m6.802s 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.698 15:55:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 15:55:02 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:04.698 15:55:02 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.698 15:55:02 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.698 15:55:02 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:04.957 ************************************ 00:07:04.957 START TEST spdk_dd_malloc 00:07:04.957 ************************************ 00:07:04.957 15:55:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:04.957 * Looking for test storage... 00:07:04.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.957 --rc genhtml_branch_coverage=1 00:07:04.957 --rc genhtml_function_coverage=1 00:07:04.957 --rc genhtml_legend=1 00:07:04.957 --rc geninfo_all_blocks=1 00:07:04.957 --rc geninfo_unexecuted_blocks=1 00:07:04.957 00:07:04.957 ' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.957 --rc genhtml_branch_coverage=1 00:07:04.957 --rc genhtml_function_coverage=1 00:07:04.957 --rc genhtml_legend=1 00:07:04.957 --rc geninfo_all_blocks=1 00:07:04.957 --rc geninfo_unexecuted_blocks=1 00:07:04.957 00:07:04.957 ' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.957 --rc genhtml_branch_coverage=1 00:07:04.957 --rc genhtml_function_coverage=1 00:07:04.957 --rc genhtml_legend=1 00:07:04.957 --rc geninfo_all_blocks=1 00:07:04.957 --rc geninfo_unexecuted_blocks=1 00:07:04.957 00:07:04.957 ' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.957 --rc genhtml_branch_coverage=1 00:07:04.957 --rc genhtml_function_coverage=1 00:07:04.957 --rc genhtml_legend=1 00:07:04.957 --rc geninfo_all_blocks=1 00:07:04.957 --rc geninfo_unexecuted_blocks=1 00:07:04.957 00:07:04.957 ' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.957 15:55:03 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:04.957 ************************************ 00:07:04.957 START TEST dd_malloc_copy 00:07:04.957 ************************************ 00:07:04.957 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
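(The associative arrays declared here and just below are what gen_conf later turns into the JSON bdev configuration printed further down: two RAM-backed malloc bdevs of 1048576 blocks x 512 bytes, 512 MiB each, copied into one another with spdk_dd. A rough stand-alone equivalent is sketched below; it assumes the same SPDK build and writes the JSON to a temporary file instead of the /dev/fd/62 descriptor the suite uses.)

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --json "$CONF"   # 512 MiB malloc-to-malloc copy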
00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:04.958 15:55:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.958 [2024-11-20 15:55:03.195440] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:04.958 [2024-11-20 15:55:03.195541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60952 ] 00:07:04.958 { 00:07:04.958 "subsystems": [ 00:07:04.958 { 00:07:04.958 "subsystem": "bdev", 00:07:04.958 "config": [ 00:07:04.958 { 00:07:04.958 "params": { 00:07:04.958 "block_size": 512, 00:07:04.958 "num_blocks": 1048576, 00:07:04.958 "name": "malloc0" 00:07:04.958 }, 00:07:04.958 "method": "bdev_malloc_create" 00:07:04.958 }, 00:07:04.958 { 00:07:04.958 "params": { 00:07:04.958 "block_size": 512, 00:07:04.958 "num_blocks": 1048576, 00:07:04.958 "name": "malloc1" 00:07:04.958 }, 00:07:04.958 "method": "bdev_malloc_create" 00:07:04.958 }, 00:07:04.958 { 00:07:04.958 "method": "bdev_wait_for_examine" 00:07:04.958 } 00:07:04.958 ] 00:07:04.958 } 00:07:04.958 ] 00:07:04.958 } 00:07:05.216 [2024-11-20 15:55:03.342548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.216 [2024-11-20 15:55:03.398654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.216 [2024-11-20 15:55:03.452071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.594  [2024-11-20T15:55:06.216Z] Copying: 196/512 [MB] (196 MBps) [2024-11-20T15:55:06.474Z] Copying: 390/512 [MB] (193 MBps) [2024-11-20T15:55:07.040Z] Copying: 512/512 [MB] (average 192 MBps) 00:07:08.790 00:07:08.790 15:55:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:08.790 15:55:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:08.790 15:55:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:08.790 15:55:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:09.049 { 00:07:09.049 "subsystems": [ 00:07:09.049 { 00:07:09.049 "subsystem": "bdev", 00:07:09.049 "config": [ 00:07:09.049 { 00:07:09.049 "params": { 00:07:09.049 "block_size": 512, 00:07:09.049 "num_blocks": 1048576, 00:07:09.049 "name": "malloc0" 00:07:09.049 }, 00:07:09.049 "method": "bdev_malloc_create" 00:07:09.049 }, 00:07:09.049 { 00:07:09.049 "params": { 00:07:09.049 "block_size": 512, 00:07:09.049 "num_blocks": 1048576, 00:07:09.049 "name": "malloc1" 00:07:09.049 }, 00:07:09.049 "method": 
"bdev_malloc_create" 00:07:09.049 }, 00:07:09.049 { 00:07:09.049 "method": "bdev_wait_for_examine" 00:07:09.049 } 00:07:09.049 ] 00:07:09.049 } 00:07:09.049 ] 00:07:09.049 } 00:07:09.049 [2024-11-20 15:55:07.087251] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:09.049 [2024-11-20 15:55:07.087383] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60994 ] 00:07:09.049 [2024-11-20 15:55:07.243990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.049 [2024-11-20 15:55:07.292670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.308 [2024-11-20 15:55:07.348261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.682  [2024-11-20T15:55:09.866Z] Copying: 199/512 [MB] (199 MBps) [2024-11-20T15:55:10.432Z] Copying: 396/512 [MB] (196 MBps) [2024-11-20T15:55:10.998Z] Copying: 512/512 [MB] (average 198 MBps) 00:07:12.748 00:07:12.748 ************************************ 00:07:12.748 END TEST dd_malloc_copy 00:07:12.748 ************************************ 00:07:12.748 00:07:12.748 real 0m7.718s 00:07:12.748 user 0m6.745s 00:07:12.748 sys 0m0.825s 00:07:12.748 15:55:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.748 15:55:10 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 ************************************ 00:07:12.748 END TEST spdk_dd_malloc 00:07:12.748 ************************************ 00:07:12.748 00:07:12.748 real 0m7.951s 00:07:12.748 user 0m6.882s 00:07:12.748 sys 0m0.923s 00:07:12.748 15:55:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.748 15:55:10 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 15:55:10 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:12.748 15:55:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:12.748 15:55:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.748 15:55:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:12.748 ************************************ 00:07:12.748 START TEST spdk_dd_bdev_to_bdev 00:07:12.748 ************************************ 00:07:12.748 15:55:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:13.007 * Looking for test storage... 
00:07:13.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:13.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.007 --rc genhtml_branch_coverage=1 00:07:13.007 --rc genhtml_function_coverage=1 00:07:13.007 --rc genhtml_legend=1 00:07:13.007 --rc geninfo_all_blocks=1 00:07:13.007 --rc geninfo_unexecuted_blocks=1 00:07:13.007 00:07:13.007 ' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:13.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.007 --rc genhtml_branch_coverage=1 00:07:13.007 --rc genhtml_function_coverage=1 00:07:13.007 --rc genhtml_legend=1 00:07:13.007 --rc geninfo_all_blocks=1 00:07:13.007 --rc geninfo_unexecuted_blocks=1 00:07:13.007 00:07:13.007 ' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:13.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.007 --rc genhtml_branch_coverage=1 00:07:13.007 --rc genhtml_function_coverage=1 00:07:13.007 --rc genhtml_legend=1 00:07:13.007 --rc geninfo_all_blocks=1 00:07:13.007 --rc geninfo_unexecuted_blocks=1 00:07:13.007 00:07:13.007 ' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:13.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.007 --rc genhtml_branch_coverage=1 00:07:13.007 --rc genhtml_function_coverage=1 00:07:13.007 --rc genhtml_legend=1 00:07:13.007 --rc geninfo_all_blocks=1 00:07:13.007 --rc geninfo_unexecuted_blocks=1 00:07:13.007 00:07:13.007 ' 00:07:13.007 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.008 15:55:11 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.008 ************************************ 00:07:13.008 START TEST dd_inflate_file 00:07:13.008 ************************************ 00:07:13.008 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:13.008 [2024-11-20 15:55:11.205887] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
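(Two details worth noting about the inflate step launched here: dd.dump0 already holds the 27-byte magic line echoed above, 'This Is Our Magic, find it' plus a newline, and the spdk_dd call appends 64 blocks of 1 MiB of zeroes via --oflag=append --bs=1048576 --count=64. That is why the wc -c a few lines below reports 67108891 bytes. A quick check of the arithmetic:)

printf '%s\n' 'This Is Our Magic, find it' | wc -c   # 27 bytes, trailing newline included
echo $(( 64 * 1048576 + 27 ))                        # 67108891, the test_file0_size seen below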
00:07:13.008 [2024-11-20 15:55:11.206152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61112 ] 00:07:13.267 [2024-11-20 15:55:11.357168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.267 [2024-11-20 15:55:11.417861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.267 [2024-11-20 15:55:11.475560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.525  [2024-11-20T15:55:11.775Z] Copying: 64/64 [MB] (average 1523 MBps) 00:07:13.525 00:07:13.525 00:07:13.525 real 0m0.595s 00:07:13.525 user 0m0.349s 00:07:13.525 sys 0m0.302s 00:07:13.525 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.525 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:13.525 ************************************ 00:07:13.525 END TEST dd_inflate_file 00:07:13.525 ************************************ 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.783 ************************************ 00:07:13.783 START TEST dd_copy_to_out_bdev 00:07:13.783 ************************************ 00:07:13.783 15:55:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:13.783 [2024-11-20 15:55:11.850021] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
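(The dd_copy_to_out_bdev run starting here pushes that roughly 64 MiB file into the first NVMe namespace with --ob=Nvme0n1; the JSON printed just below attaches the two controllers at PCI 0000:00:10.0 and 0000:00:11.0 as Nvme0 and Nvme1. A condensed sketch of the same call follows, assuming the JSON is saved to an ordinary file rather than passed on /dev/fd/62; the file and variable names are illustrative only.)

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF=nvme.json
cat > "$CONF" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "trtype": "pcie", "traddr": "0000:00:11.0", "name": "Nvme1" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json "$CONF"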
00:07:13.783 [2024-11-20 15:55:11.850238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61151 ] 00:07:13.783 { 00:07:13.783 "subsystems": [ 00:07:13.783 { 00:07:13.783 "subsystem": "bdev", 00:07:13.783 "config": [ 00:07:13.783 { 00:07:13.783 "params": { 00:07:13.783 "trtype": "pcie", 00:07:13.783 "traddr": "0000:00:10.0", 00:07:13.784 "name": "Nvme0" 00:07:13.784 }, 00:07:13.784 "method": "bdev_nvme_attach_controller" 00:07:13.784 }, 00:07:13.784 { 00:07:13.784 "params": { 00:07:13.784 "trtype": "pcie", 00:07:13.784 "traddr": "0000:00:11.0", 00:07:13.784 "name": "Nvme1" 00:07:13.784 }, 00:07:13.784 "method": "bdev_nvme_attach_controller" 00:07:13.784 }, 00:07:13.784 { 00:07:13.784 "method": "bdev_wait_for_examine" 00:07:13.784 } 00:07:13.784 ] 00:07:13.784 } 00:07:13.784 ] 00:07:13.784 } 00:07:13.784 [2024-11-20 15:55:11.992944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.042 [2024-11-20 15:55:12.046531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.042 [2024-11-20 15:55:12.101930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.415  [2024-11-20T15:55:13.665Z] Copying: 58/64 [MB] (58 MBps) [2024-11-20T15:55:13.665Z] Copying: 64/64 [MB] (average 58 MBps) 00:07:15.415 00:07:15.415 ************************************ 00:07:15.415 END TEST dd_copy_to_out_bdev 00:07:15.415 ************************************ 00:07:15.415 00:07:15.415 real 0m1.803s 00:07:15.415 user 0m1.574s 00:07:15.415 sys 0m1.454s 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:15.415 ************************************ 00:07:15.415 START TEST dd_offset_magic 00:07:15.415 ************************************ 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:15.415 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 
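(dd_offset_magic, set up here, repeats a write/read-back pair at two offsets, 16 and 64 blocks of 1 MiB: 65 MiB are copied from Nvme0n1 into Nvme1n1 at the offset, then the first block at that offset is read back and is expected to start with the 26-character magic line. Below is a condensed sketch of one round trip using the flags from the trace; it assumes the NVMe attach JSON sketched earlier is in $CONF, and the variable names are illustrative.)

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
SEEK=16   # the second pass in the trace uses 64
"$SPDK_DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$SEEK" --bs=1048576 --json "$CONF"
"$SPDK_DD" --ib=Nvme1n1 --of="$DUMP1" --count=1 --skip="$SEEK" --bs=1048576 --json "$CONF"
read -rn26 magic_check < "$DUMP1"
[[ $magic_check == 'This Is Our Magic, find it' ]]   # the test fails if the magic did not survive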
00:07:15.673 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:15.673 15:55:13 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:15.673 { 00:07:15.673 "subsystems": [ 00:07:15.673 { 00:07:15.673 "subsystem": "bdev", 00:07:15.673 "config": [ 00:07:15.673 { 00:07:15.673 "params": { 00:07:15.673 "trtype": "pcie", 00:07:15.673 "traddr": "0000:00:10.0", 00:07:15.673 "name": "Nvme0" 00:07:15.673 }, 00:07:15.673 "method": "bdev_nvme_attach_controller" 00:07:15.673 }, 00:07:15.673 { 00:07:15.673 "params": { 00:07:15.673 "trtype": "pcie", 00:07:15.673 "traddr": "0000:00:11.0", 00:07:15.673 "name": "Nvme1" 00:07:15.673 }, 00:07:15.673 "method": "bdev_nvme_attach_controller" 00:07:15.673 }, 00:07:15.673 { 00:07:15.673 "method": "bdev_wait_for_examine" 00:07:15.673 } 00:07:15.673 ] 00:07:15.673 } 00:07:15.673 ] 00:07:15.673 } 00:07:15.673 [2024-11-20 15:55:13.718305] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:15.673 [2024-11-20 15:55:13.718587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:07:15.673 [2024-11-20 15:55:13.868093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.673 [2024-11-20 15:55:13.919584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.932 [2024-11-20 15:55:13.977402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.190  [2024-11-20T15:55:14.698Z] Copying: 65/65 [MB] (average 928 MBps) 00:07:16.448 00:07:16.448 15:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:16.448 15:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:16.448 15:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:16.448 15:55:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:16.448 [2024-11-20 15:55:14.513750] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:16.448 [2024-11-20 15:55:14.513994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61212 ] 00:07:16.448 { 00:07:16.448 "subsystems": [ 00:07:16.448 { 00:07:16.448 "subsystem": "bdev", 00:07:16.448 "config": [ 00:07:16.448 { 00:07:16.448 "params": { 00:07:16.448 "trtype": "pcie", 00:07:16.448 "traddr": "0000:00:10.0", 00:07:16.448 "name": "Nvme0" 00:07:16.448 }, 00:07:16.448 "method": "bdev_nvme_attach_controller" 00:07:16.448 }, 00:07:16.448 { 00:07:16.448 "params": { 00:07:16.448 "trtype": "pcie", 00:07:16.448 "traddr": "0000:00:11.0", 00:07:16.448 "name": "Nvme1" 00:07:16.448 }, 00:07:16.448 "method": "bdev_nvme_attach_controller" 00:07:16.448 }, 00:07:16.448 { 00:07:16.448 "method": "bdev_wait_for_examine" 00:07:16.448 } 00:07:16.448 ] 00:07:16.448 } 00:07:16.448 ] 00:07:16.448 } 00:07:16.448 [2024-11-20 15:55:14.664108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.706 [2024-11-20 15:55:14.720035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.706 [2024-11-20 15:55:14.778776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.964  [2024-11-20T15:55:15.214Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:16.964 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:16.964 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:16.964 [2024-11-20 15:55:15.196240] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:16.964 [2024-11-20 15:55:15.196476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:07:16.964 { 00:07:16.964 "subsystems": [ 00:07:16.964 { 00:07:16.964 "subsystem": "bdev", 00:07:16.964 "config": [ 00:07:16.964 { 00:07:16.964 "params": { 00:07:16.964 "trtype": "pcie", 00:07:16.964 "traddr": "0000:00:10.0", 00:07:16.964 "name": "Nvme0" 00:07:16.964 }, 00:07:16.964 "method": "bdev_nvme_attach_controller" 00:07:16.964 }, 00:07:16.964 { 00:07:16.964 "params": { 00:07:16.964 "trtype": "pcie", 00:07:16.964 "traddr": "0000:00:11.0", 00:07:16.964 "name": "Nvme1" 00:07:16.964 }, 00:07:16.964 "method": "bdev_nvme_attach_controller" 00:07:16.964 }, 00:07:16.964 { 00:07:16.964 "method": "bdev_wait_for_examine" 00:07:16.964 } 00:07:16.964 ] 00:07:16.964 } 00:07:16.964 ] 00:07:16.964 } 00:07:17.221 [2024-11-20 15:55:15.340325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.221 [2024-11-20 15:55:15.401572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.221 [2024-11-20 15:55:15.459213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.479  [2024-11-20T15:55:15.986Z] Copying: 65/65 [MB] (average 984 MBps) 00:07:17.736 00:07:17.736 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:17.736 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:17.736 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:17.736 15:55:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:17.998 { 00:07:17.998 "subsystems": [ 00:07:17.998 { 00:07:17.998 "subsystem": "bdev", 00:07:17.998 "config": [ 00:07:17.998 { 00:07:17.998 "params": { 00:07:17.998 "trtype": "pcie", 00:07:17.998 "traddr": "0000:00:10.0", 00:07:17.999 "name": "Nvme0" 00:07:17.999 }, 00:07:17.999 "method": "bdev_nvme_attach_controller" 00:07:17.999 }, 00:07:17.999 { 00:07:17.999 "params": { 00:07:17.999 "trtype": "pcie", 00:07:17.999 "traddr": "0000:00:11.0", 00:07:17.999 "name": "Nvme1" 00:07:17.999 }, 00:07:17.999 "method": "bdev_nvme_attach_controller" 00:07:17.999 }, 00:07:17.999 { 00:07:17.999 "method": "bdev_wait_for_examine" 00:07:17.999 } 00:07:17.999 ] 00:07:17.999 } 00:07:17.999 ] 00:07:17.999 } 00:07:17.999 [2024-11-20 15:55:16.010175] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:17.999 [2024-11-20 15:55:16.010341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61247 ] 00:07:17.999 [2024-11-20 15:55:16.165698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.999 [2024-11-20 15:55:16.223537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.271 [2024-11-20 15:55:16.283211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.271  [2024-11-20T15:55:16.779Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:18.529 00:07:18.529 ************************************ 00:07:18.529 END TEST dd_offset_magic 00:07:18.529 ************************************ 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:18.529 00:07:18.529 real 0m3.009s 00:07:18.529 user 0m2.135s 00:07:18.529 sys 0m0.928s 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:18.529 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:18.530 15:55:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:18.530 { 00:07:18.530 "subsystems": [ 00:07:18.530 { 00:07:18.530 "subsystem": "bdev", 00:07:18.530 "config": [ 00:07:18.530 { 00:07:18.530 "params": { 00:07:18.530 "trtype": "pcie", 00:07:18.530 "traddr": "0000:00:10.0", 00:07:18.530 "name": "Nvme0" 00:07:18.530 }, 00:07:18.530 "method": "bdev_nvme_attach_controller" 00:07:18.530 }, 00:07:18.530 { 00:07:18.530 "params": { 00:07:18.530 "trtype": "pcie", 00:07:18.530 "traddr": "0000:00:11.0", 00:07:18.530 "name": "Nvme1" 00:07:18.530 }, 00:07:18.530 "method": "bdev_nvme_attach_controller" 00:07:18.530 }, 00:07:18.530 { 00:07:18.530 "method": "bdev_wait_for_examine" 00:07:18.530 } 00:07:18.530 ] 00:07:18.530 } 00:07:18.530 ] 00:07:18.530 } 00:07:18.787 [2024-11-20 15:55:16.783685] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:18.787 [2024-11-20 15:55:16.783797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61284 ] 00:07:18.787 [2024-11-20 15:55:16.938924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.787 [2024-11-20 15:55:16.994351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.044 [2024-11-20 15:55:17.049774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.044  [2024-11-20T15:55:17.552Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:19.302 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:19.302 15:55:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:19.302 [2024-11-20 15:55:17.484551] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:19.302 [2024-11-20 15:55:17.484636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61300 ] 00:07:19.302 { 00:07:19.302 "subsystems": [ 00:07:19.302 { 00:07:19.302 "subsystem": "bdev", 00:07:19.302 "config": [ 00:07:19.302 { 00:07:19.302 "params": { 00:07:19.302 "trtype": "pcie", 00:07:19.302 "traddr": "0000:00:10.0", 00:07:19.302 "name": "Nvme0" 00:07:19.302 }, 00:07:19.302 "method": "bdev_nvme_attach_controller" 00:07:19.302 }, 00:07:19.302 { 00:07:19.302 "params": { 00:07:19.302 "trtype": "pcie", 00:07:19.302 "traddr": "0000:00:11.0", 00:07:19.302 "name": "Nvme1" 00:07:19.302 }, 00:07:19.302 "method": "bdev_nvme_attach_controller" 00:07:19.302 }, 00:07:19.302 { 00:07:19.302 "method": "bdev_wait_for_examine" 00:07:19.302 } 00:07:19.302 ] 00:07:19.302 } 00:07:19.302 ] 00:07:19.302 } 00:07:19.559 [2024-11-20 15:55:17.627880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.559 [2024-11-20 15:55:17.691649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.559 [2024-11-20 15:55:17.747709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.817  [2024-11-20T15:55:18.326Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:07:20.076 00:07:20.076 15:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:20.076 ************************************ 00:07:20.076 END TEST spdk_dd_bdev_to_bdev 00:07:20.076 ************************************ 00:07:20.076 00:07:20.076 real 0m7.194s 00:07:20.076 user 0m5.252s 00:07:20.076 sys 0m3.403s 00:07:20.076 15:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.076 15:55:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:20.076 15:55:18 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:20.076 15:55:18 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:20.076 15:55:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.076 15:55:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.076 15:55:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:20.076 ************************************ 00:07:20.076 START TEST spdk_dd_uring 00:07:20.076 ************************************ 00:07:20.076 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:20.076 * Looking for test storage... 
00:07:20.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:20.076 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:20.076 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:07:20.076 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.335 --rc genhtml_branch_coverage=1 00:07:20.335 --rc genhtml_function_coverage=1 00:07:20.335 --rc genhtml_legend=1 00:07:20.335 --rc geninfo_all_blocks=1 00:07:20.335 --rc geninfo_unexecuted_blocks=1 00:07:20.335 00:07:20.335 ' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.335 --rc genhtml_branch_coverage=1 00:07:20.335 --rc genhtml_function_coverage=1 00:07:20.335 --rc genhtml_legend=1 00:07:20.335 --rc geninfo_all_blocks=1 00:07:20.335 --rc geninfo_unexecuted_blocks=1 00:07:20.335 00:07:20.335 ' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.335 --rc genhtml_branch_coverage=1 00:07:20.335 --rc genhtml_function_coverage=1 00:07:20.335 --rc genhtml_legend=1 00:07:20.335 --rc geninfo_all_blocks=1 00:07:20.335 --rc geninfo_unexecuted_blocks=1 00:07:20.335 00:07:20.335 ' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:20.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.335 --rc genhtml_branch_coverage=1 00:07:20.335 --rc genhtml_function_coverage=1 00:07:20.335 --rc genhtml_legend=1 00:07:20.335 --rc geninfo_all_blocks=1 00:07:20.335 --rc geninfo_unexecuted_blocks=1 00:07:20.335 00:07:20.335 ' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:20.335 ************************************ 00:07:20.335 START TEST dd_uring_copy 00:07:20.335 ************************************ 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:20.335 
15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:20.335 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:20.336 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:20.336 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=yuh63b78fxvwfq1u57bjgwdhok0koxp1qxp1duk36px6nf5wdqsunhhsigkqj0q1srhvhtlq19ybwifip7oc7od0d250akhk3t8mll6npnal74h3bqmskbm59kbcmlsrl5si1embig76ts42vh06jmxigi90mven2eq32s5ey3uixegebyus6pn2z4igw0w45jvp8v5dt51krn3ogpqjgr2qlx1iyp1odxtu1a9ryesh8mqsdf7poqnftz3mrlia8j0nr5hs061ql5pcgvzlm0x9h5lzzd3i2fjgq3nm8mb02o1pnxrwdpkhlgqn9g0mn0swfrr2d7yhu1gdnljvt4gqjwa17gcuomj1f7dzkavq7bir17hx38gsafo058ct1lqipjx5ci0fxh6i6q53cyj94my4tio5haqjfol4vkejzao6xiz86lkymeljynszqalxd0kc13mf6ub4lkafeu8o7hi1zk8larwild05ixicjzm73ynk7ji7vdtxpdfwnidz0u4cu22ecgksuswq3shm98he32x8le2iremc0bhg8w3c4uolnw11h7zffnmbgkp602kohon2okt9dko59vpoqignhg7in4se6j3ujhc5bsj55tbnp2b0af5d6uzv3pv0wfxkz8bbl1kvafid0ghkb92mmyrmcqlng7k3bzi7bakhcqxbf0d36dxo28g7sjaj0ioroqacmlyyhy2s9nkej7hcaitxj030kycwpvkivv5ale163of2nwrfq5fu5nr30rskn6418clpaazl1kaqgb277e8y76jdk10u5a24po598bwez05oo4rmn645uevfl1mx3pi97oxuugozf88kcus80eid9p9h4qhfugufy0di78rw8o7io5niq95r7qsf0wg8vmj24q2pksfedy2tsyioa52gksrjz8bo2cv9alehrtn9o7u5v4nkhc9b0gv07yt7hfr176dvz7mu6pr6m3nl4l23suoyytaaxltk52vstexp3vnqoolbxxew 00:07:20.336 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
yuh63b78fxvwfq1u57bjgwdhok0koxp1qxp1duk36px6nf5wdqsunhhsigkqj0q1srhvhtlq19ybwifip7oc7od0d250akhk3t8mll6npnal74h3bqmskbm59kbcmlsrl5si1embig76ts42vh06jmxigi90mven2eq32s5ey3uixegebyus6pn2z4igw0w45jvp8v5dt51krn3ogpqjgr2qlx1iyp1odxtu1a9ryesh8mqsdf7poqnftz3mrlia8j0nr5hs061ql5pcgvzlm0x9h5lzzd3i2fjgq3nm8mb02o1pnxrwdpkhlgqn9g0mn0swfrr2d7yhu1gdnljvt4gqjwa17gcuomj1f7dzkavq7bir17hx38gsafo058ct1lqipjx5ci0fxh6i6q53cyj94my4tio5haqjfol4vkejzao6xiz86lkymeljynszqalxd0kc13mf6ub4lkafeu8o7hi1zk8larwild05ixicjzm73ynk7ji7vdtxpdfwnidz0u4cu22ecgksuswq3shm98he32x8le2iremc0bhg8w3c4uolnw11h7zffnmbgkp602kohon2okt9dko59vpoqignhg7in4se6j3ujhc5bsj55tbnp2b0af5d6uzv3pv0wfxkz8bbl1kvafid0ghkb92mmyrmcqlng7k3bzi7bakhcqxbf0d36dxo28g7sjaj0ioroqacmlyyhy2s9nkej7hcaitxj030kycwpvkivv5ale163of2nwrfq5fu5nr30rskn6418clpaazl1kaqgb277e8y76jdk10u5a24po598bwez05oo4rmn645uevfl1mx3pi97oxuugozf88kcus80eid9p9h4qhfugufy0di78rw8o7io5niq95r7qsf0wg8vmj24q2pksfedy2tsyioa52gksrjz8bo2cv9alehrtn9o7u5v4nkhc9b0gv07yt7hfr176dvz7mu6pr6m3nl4l23suoyytaaxltk52vstexp3vnqoolbxxew 00:07:20.336 15:55:18 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:20.336 [2024-11-20 15:55:18.476866] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:20.336 [2024-11-20 15:55:18.476948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61378 ] 00:07:20.595 [2024-11-20 15:55:18.619134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.595 [2024-11-20 15:55:18.677605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.595 [2024-11-20 15:55:18.735562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.536  [2024-11-20T15:55:20.044Z] Copying: 511/511 [MB] (average 1036 MBps) 00:07:21.794 00:07:21.794 15:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:21.794 15:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:21.794 15:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:21.794 15:55:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:21.794 [2024-11-20 15:55:19.894373] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:21.794 [2024-11-20 15:55:19.894472] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:07:21.794 { 00:07:21.794 "subsystems": [ 00:07:21.794 { 00:07:21.794 "subsystem": "bdev", 00:07:21.794 "config": [ 00:07:21.794 { 00:07:21.794 "params": { 00:07:21.794 "block_size": 512, 00:07:21.794 "num_blocks": 1048576, 00:07:21.794 "name": "malloc0" 00:07:21.794 }, 00:07:21.794 "method": "bdev_malloc_create" 00:07:21.794 }, 00:07:21.794 { 00:07:21.794 "params": { 00:07:21.794 "filename": "/dev/zram1", 00:07:21.794 "name": "uring0" 00:07:21.794 }, 00:07:21.794 "method": "bdev_uring_create" 00:07:21.794 }, 00:07:21.794 { 00:07:21.794 "method": "bdev_wait_for_examine" 00:07:21.794 } 00:07:21.794 ] 00:07:21.794 } 00:07:21.794 ] 00:07:21.794 } 00:07:22.053 [2024-11-20 15:55:20.043457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.053 [2024-11-20 15:55:20.102851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.053 [2024-11-20 15:55:20.160165] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.428  [2024-11-20T15:55:22.613Z] Copying: 197/512 [MB] (197 MBps) [2024-11-20T15:55:23.179Z] Copying: 395/512 [MB] (198 MBps) [2024-11-20T15:55:23.437Z] Copying: 512/512 [MB] (average 198 MBps) 00:07:25.187 00:07:25.187 15:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:25.187 15:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:25.187 15:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:25.187 15:55:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:25.187 [2024-11-20 15:55:23.393506] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:25.187 [2024-11-20 15:55:23.393614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:07:25.187 { 00:07:25.187 "subsystems": [ 00:07:25.187 { 00:07:25.187 "subsystem": "bdev", 00:07:25.187 "config": [ 00:07:25.187 { 00:07:25.187 "params": { 00:07:25.187 "block_size": 512, 00:07:25.187 "num_blocks": 1048576, 00:07:25.187 "name": "malloc0" 00:07:25.187 }, 00:07:25.187 "method": "bdev_malloc_create" 00:07:25.187 }, 00:07:25.187 { 00:07:25.187 "params": { 00:07:25.187 "filename": "/dev/zram1", 00:07:25.187 "name": "uring0" 00:07:25.187 }, 00:07:25.187 "method": "bdev_uring_create" 00:07:25.187 }, 00:07:25.187 { 00:07:25.187 "method": "bdev_wait_for_examine" 00:07:25.187 } 00:07:25.187 ] 00:07:25.187 } 00:07:25.187 ] 00:07:25.187 } 00:07:25.445 [2024-11-20 15:55:23.542242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.445 [2024-11-20 15:55:23.600163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.445 [2024-11-20 15:55:23.658319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.820  [2024-11-20T15:55:26.004Z] Copying: 168/512 [MB] (168 MBps) [2024-11-20T15:55:26.937Z] Copying: 328/512 [MB] (159 MBps) [2024-11-20T15:55:26.937Z] Copying: 501/512 [MB] (172 MBps) [2024-11-20T15:55:27.503Z] Copying: 512/512 [MB] (average 166 MBps) 00:07:29.253 00:07:29.253 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:29.253 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ yuh63b78fxvwfq1u57bjgwdhok0koxp1qxp1duk36px6nf5wdqsunhhsigkqj0q1srhvhtlq19ybwifip7oc7od0d250akhk3t8mll6npnal74h3bqmskbm59kbcmlsrl5si1embig76ts42vh06jmxigi90mven2eq32s5ey3uixegebyus6pn2z4igw0w45jvp8v5dt51krn3ogpqjgr2qlx1iyp1odxtu1a9ryesh8mqsdf7poqnftz3mrlia8j0nr5hs061ql5pcgvzlm0x9h5lzzd3i2fjgq3nm8mb02o1pnxrwdpkhlgqn9g0mn0swfrr2d7yhu1gdnljvt4gqjwa17gcuomj1f7dzkavq7bir17hx38gsafo058ct1lqipjx5ci0fxh6i6q53cyj94my4tio5haqjfol4vkejzao6xiz86lkymeljynszqalxd0kc13mf6ub4lkafeu8o7hi1zk8larwild05ixicjzm73ynk7ji7vdtxpdfwnidz0u4cu22ecgksuswq3shm98he32x8le2iremc0bhg8w3c4uolnw11h7zffnmbgkp602kohon2okt9dko59vpoqignhg7in4se6j3ujhc5bsj55tbnp2b0af5d6uzv3pv0wfxkz8bbl1kvafid0ghkb92mmyrmcqlng7k3bzi7bakhcqxbf0d36dxo28g7sjaj0ioroqacmlyyhy2s9nkej7hcaitxj030kycwpvkivv5ale163of2nwrfq5fu5nr30rskn6418clpaazl1kaqgb277e8y76jdk10u5a24po598bwez05oo4rmn645uevfl1mx3pi97oxuugozf88kcus80eid9p9h4qhfugufy0di78rw8o7io5niq95r7qsf0wg8vmj24q2pksfedy2tsyioa52gksrjz8bo2cv9alehrtn9o7u5v4nkhc9b0gv07yt7hfr176dvz7mu6pr6m3nl4l23suoyytaaxltk52vstexp3vnqoolbxxew == 
\y\u\h\6\3\b\7\8\f\x\v\w\f\q\1\u\5\7\b\j\g\w\d\h\o\k\0\k\o\x\p\1\q\x\p\1\d\u\k\3\6\p\x\6\n\f\5\w\d\q\s\u\n\h\h\s\i\g\k\q\j\0\q\1\s\r\h\v\h\t\l\q\1\9\y\b\w\i\f\i\p\7\o\c\7\o\d\0\d\2\5\0\a\k\h\k\3\t\8\m\l\l\6\n\p\n\a\l\7\4\h\3\b\q\m\s\k\b\m\5\9\k\b\c\m\l\s\r\l\5\s\i\1\e\m\b\i\g\7\6\t\s\4\2\v\h\0\6\j\m\x\i\g\i\9\0\m\v\e\n\2\e\q\3\2\s\5\e\y\3\u\i\x\e\g\e\b\y\u\s\6\p\n\2\z\4\i\g\w\0\w\4\5\j\v\p\8\v\5\d\t\5\1\k\r\n\3\o\g\p\q\j\g\r\2\q\l\x\1\i\y\p\1\o\d\x\t\u\1\a\9\r\y\e\s\h\8\m\q\s\d\f\7\p\o\q\n\f\t\z\3\m\r\l\i\a\8\j\0\n\r\5\h\s\0\6\1\q\l\5\p\c\g\v\z\l\m\0\x\9\h\5\l\z\z\d\3\i\2\f\j\g\q\3\n\m\8\m\b\0\2\o\1\p\n\x\r\w\d\p\k\h\l\g\q\n\9\g\0\m\n\0\s\w\f\r\r\2\d\7\y\h\u\1\g\d\n\l\j\v\t\4\g\q\j\w\a\1\7\g\c\u\o\m\j\1\f\7\d\z\k\a\v\q\7\b\i\r\1\7\h\x\3\8\g\s\a\f\o\0\5\8\c\t\1\l\q\i\p\j\x\5\c\i\0\f\x\h\6\i\6\q\5\3\c\y\j\9\4\m\y\4\t\i\o\5\h\a\q\j\f\o\l\4\v\k\e\j\z\a\o\6\x\i\z\8\6\l\k\y\m\e\l\j\y\n\s\z\q\a\l\x\d\0\k\c\1\3\m\f\6\u\b\4\l\k\a\f\e\u\8\o\7\h\i\1\z\k\8\l\a\r\w\i\l\d\0\5\i\x\i\c\j\z\m\7\3\y\n\k\7\j\i\7\v\d\t\x\p\d\f\w\n\i\d\z\0\u\4\c\u\2\2\e\c\g\k\s\u\s\w\q\3\s\h\m\9\8\h\e\3\2\x\8\l\e\2\i\r\e\m\c\0\b\h\g\8\w\3\c\4\u\o\l\n\w\1\1\h\7\z\f\f\n\m\b\g\k\p\6\0\2\k\o\h\o\n\2\o\k\t\9\d\k\o\5\9\v\p\o\q\i\g\n\h\g\7\i\n\4\s\e\6\j\3\u\j\h\c\5\b\s\j\5\5\t\b\n\p\2\b\0\a\f\5\d\6\u\z\v\3\p\v\0\w\f\x\k\z\8\b\b\l\1\k\v\a\f\i\d\0\g\h\k\b\9\2\m\m\y\r\m\c\q\l\n\g\7\k\3\b\z\i\7\b\a\k\h\c\q\x\b\f\0\d\3\6\d\x\o\2\8\g\7\s\j\a\j\0\i\o\r\o\q\a\c\m\l\y\y\h\y\2\s\9\n\k\e\j\7\h\c\a\i\t\x\j\0\3\0\k\y\c\w\p\v\k\i\v\v\5\a\l\e\1\6\3\o\f\2\n\w\r\f\q\5\f\u\5\n\r\3\0\r\s\k\n\6\4\1\8\c\l\p\a\a\z\l\1\k\a\q\g\b\2\7\7\e\8\y\7\6\j\d\k\1\0\u\5\a\2\4\p\o\5\9\8\b\w\e\z\0\5\o\o\4\r\m\n\6\4\5\u\e\v\f\l\1\m\x\3\p\i\9\7\o\x\u\u\g\o\z\f\8\8\k\c\u\s\8\0\e\i\d\9\p\9\h\4\q\h\f\u\g\u\f\y\0\d\i\7\8\r\w\8\o\7\i\o\5\n\i\q\9\5\r\7\q\s\f\0\w\g\8\v\m\j\2\4\q\2\p\k\s\f\e\d\y\2\t\s\y\i\o\a\5\2\g\k\s\r\j\z\8\b\o\2\c\v\9\a\l\e\h\r\t\n\9\o\7\u\5\v\4\n\k\h\c\9\b\0\g\v\0\7\y\t\7\h\f\r\1\7\6\d\v\z\7\m\u\6\p\r\6\m\3\n\l\4\l\2\3\s\u\o\y\y\t\a\a\x\l\t\k\5\2\v\s\t\e\x\p\3\v\n\q\o\o\l\b\x\x\e\w ]] 00:07:29.253 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:29.253 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ yuh63b78fxvwfq1u57bjgwdhok0koxp1qxp1duk36px6nf5wdqsunhhsigkqj0q1srhvhtlq19ybwifip7oc7od0d250akhk3t8mll6npnal74h3bqmskbm59kbcmlsrl5si1embig76ts42vh06jmxigi90mven2eq32s5ey3uixegebyus6pn2z4igw0w45jvp8v5dt51krn3ogpqjgr2qlx1iyp1odxtu1a9ryesh8mqsdf7poqnftz3mrlia8j0nr5hs061ql5pcgvzlm0x9h5lzzd3i2fjgq3nm8mb02o1pnxrwdpkhlgqn9g0mn0swfrr2d7yhu1gdnljvt4gqjwa17gcuomj1f7dzkavq7bir17hx38gsafo058ct1lqipjx5ci0fxh6i6q53cyj94my4tio5haqjfol4vkejzao6xiz86lkymeljynszqalxd0kc13mf6ub4lkafeu8o7hi1zk8larwild05ixicjzm73ynk7ji7vdtxpdfwnidz0u4cu22ecgksuswq3shm98he32x8le2iremc0bhg8w3c4uolnw11h7zffnmbgkp602kohon2okt9dko59vpoqignhg7in4se6j3ujhc5bsj55tbnp2b0af5d6uzv3pv0wfxkz8bbl1kvafid0ghkb92mmyrmcqlng7k3bzi7bakhcqxbf0d36dxo28g7sjaj0ioroqacmlyyhy2s9nkej7hcaitxj030kycwpvkivv5ale163of2nwrfq5fu5nr30rskn6418clpaazl1kaqgb277e8y76jdk10u5a24po598bwez05oo4rmn645uevfl1mx3pi97oxuugozf88kcus80eid9p9h4qhfugufy0di78rw8o7io5niq95r7qsf0wg8vmj24q2pksfedy2tsyioa52gksrjz8bo2cv9alehrtn9o7u5v4nkhc9b0gv07yt7hfr176dvz7mu6pr6m3nl4l23suoyytaaxltk52vstexp3vnqoolbxxew == 
\y\u\h\6\3\b\7\8\f\x\v\w\f\q\1\u\5\7\b\j\g\w\d\h\o\k\0\k\o\x\p\1\q\x\p\1\d\u\k\3\6\p\x\6\n\f\5\w\d\q\s\u\n\h\h\s\i\g\k\q\j\0\q\1\s\r\h\v\h\t\l\q\1\9\y\b\w\i\f\i\p\7\o\c\7\o\d\0\d\2\5\0\a\k\h\k\3\t\8\m\l\l\6\n\p\n\a\l\7\4\h\3\b\q\m\s\k\b\m\5\9\k\b\c\m\l\s\r\l\5\s\i\1\e\m\b\i\g\7\6\t\s\4\2\v\h\0\6\j\m\x\i\g\i\9\0\m\v\e\n\2\e\q\3\2\s\5\e\y\3\u\i\x\e\g\e\b\y\u\s\6\p\n\2\z\4\i\g\w\0\w\4\5\j\v\p\8\v\5\d\t\5\1\k\r\n\3\o\g\p\q\j\g\r\2\q\l\x\1\i\y\p\1\o\d\x\t\u\1\a\9\r\y\e\s\h\8\m\q\s\d\f\7\p\o\q\n\f\t\z\3\m\r\l\i\a\8\j\0\n\r\5\h\s\0\6\1\q\l\5\p\c\g\v\z\l\m\0\x\9\h\5\l\z\z\d\3\i\2\f\j\g\q\3\n\m\8\m\b\0\2\o\1\p\n\x\r\w\d\p\k\h\l\g\q\n\9\g\0\m\n\0\s\w\f\r\r\2\d\7\y\h\u\1\g\d\n\l\j\v\t\4\g\q\j\w\a\1\7\g\c\u\o\m\j\1\f\7\d\z\k\a\v\q\7\b\i\r\1\7\h\x\3\8\g\s\a\f\o\0\5\8\c\t\1\l\q\i\p\j\x\5\c\i\0\f\x\h\6\i\6\q\5\3\c\y\j\9\4\m\y\4\t\i\o\5\h\a\q\j\f\o\l\4\v\k\e\j\z\a\o\6\x\i\z\8\6\l\k\y\m\e\l\j\y\n\s\z\q\a\l\x\d\0\k\c\1\3\m\f\6\u\b\4\l\k\a\f\e\u\8\o\7\h\i\1\z\k\8\l\a\r\w\i\l\d\0\5\i\x\i\c\j\z\m\7\3\y\n\k\7\j\i\7\v\d\t\x\p\d\f\w\n\i\d\z\0\u\4\c\u\2\2\e\c\g\k\s\u\s\w\q\3\s\h\m\9\8\h\e\3\2\x\8\l\e\2\i\r\e\m\c\0\b\h\g\8\w\3\c\4\u\o\l\n\w\1\1\h\7\z\f\f\n\m\b\g\k\p\6\0\2\k\o\h\o\n\2\o\k\t\9\d\k\o\5\9\v\p\o\q\i\g\n\h\g\7\i\n\4\s\e\6\j\3\u\j\h\c\5\b\s\j\5\5\t\b\n\p\2\b\0\a\f\5\d\6\u\z\v\3\p\v\0\w\f\x\k\z\8\b\b\l\1\k\v\a\f\i\d\0\g\h\k\b\9\2\m\m\y\r\m\c\q\l\n\g\7\k\3\b\z\i\7\b\a\k\h\c\q\x\b\f\0\d\3\6\d\x\o\2\8\g\7\s\j\a\j\0\i\o\r\o\q\a\c\m\l\y\y\h\y\2\s\9\n\k\e\j\7\h\c\a\i\t\x\j\0\3\0\k\y\c\w\p\v\k\i\v\v\5\a\l\e\1\6\3\o\f\2\n\w\r\f\q\5\f\u\5\n\r\3\0\r\s\k\n\6\4\1\8\c\l\p\a\a\z\l\1\k\a\q\g\b\2\7\7\e\8\y\7\6\j\d\k\1\0\u\5\a\2\4\p\o\5\9\8\b\w\e\z\0\5\o\o\4\r\m\n\6\4\5\u\e\v\f\l\1\m\x\3\p\i\9\7\o\x\u\u\g\o\z\f\8\8\k\c\u\s\8\0\e\i\d\9\p\9\h\4\q\h\f\u\g\u\f\y\0\d\i\7\8\r\w\8\o\7\i\o\5\n\i\q\9\5\r\7\q\s\f\0\w\g\8\v\m\j\2\4\q\2\p\k\s\f\e\d\y\2\t\s\y\i\o\a\5\2\g\k\s\r\j\z\8\b\o\2\c\v\9\a\l\e\h\r\t\n\9\o\7\u\5\v\4\n\k\h\c\9\b\0\g\v\0\7\y\t\7\h\f\r\1\7\6\d\v\z\7\m\u\6\p\r\6\m\3\n\l\4\l\2\3\s\u\o\y\y\t\a\a\x\l\t\k\5\2\v\s\t\e\x\p\3\v\n\q\o\o\l\b\x\x\e\w ]] 00:07:29.253 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:29.819 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:29.819 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:29.819 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:29.819 15:55:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.819 { 00:07:29.819 "subsystems": [ 00:07:29.819 { 00:07:29.819 "subsystem": "bdev", 00:07:29.819 "config": [ 00:07:29.819 { 00:07:29.819 "params": { 00:07:29.819 "block_size": 512, 00:07:29.819 "num_blocks": 1048576, 00:07:29.819 "name": "malloc0" 00:07:29.819 }, 00:07:29.819 "method": "bdev_malloc_create" 00:07:29.819 }, 00:07:29.819 { 00:07:29.819 "params": { 00:07:29.819 "filename": "/dev/zram1", 00:07:29.819 "name": "uring0" 00:07:29.819 }, 00:07:29.819 "method": "bdev_uring_create" 00:07:29.819 }, 00:07:29.819 { 00:07:29.819 "method": "bdev_wait_for_examine" 00:07:29.819 } 00:07:29.819 ] 00:07:29.819 } 00:07:29.819 ] 00:07:29.819 } 00:07:29.819 [2024-11-20 15:55:27.903277] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:29.819 [2024-11-20 15:55:27.903415] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61522 ] 00:07:29.819 [2024-11-20 15:55:28.057882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.090 [2024-11-20 15:55:28.116203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.090 [2024-11-20 15:55:28.173216] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.462  [2024-11-20T15:55:30.647Z] Copying: 142/512 [MB] (142 MBps) [2024-11-20T15:55:31.582Z] Copying: 283/512 [MB] (140 MBps) [2024-11-20T15:55:32.147Z] Copying: 420/512 [MB] (137 MBps) [2024-11-20T15:55:32.713Z] Copying: 512/512 [MB] (average 140 MBps) 00:07:34.463 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:34.463 15:55:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:34.463 [2024-11-20 15:55:32.488146] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:34.463 [2024-11-20 15:55:32.488246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61578 ] 00:07:34.463 { 00:07:34.463 "subsystems": [ 00:07:34.463 { 00:07:34.463 "subsystem": "bdev", 00:07:34.463 "config": [ 00:07:34.463 { 00:07:34.463 "params": { 00:07:34.463 "block_size": 512, 00:07:34.463 "num_blocks": 1048576, 00:07:34.463 "name": "malloc0" 00:07:34.463 }, 00:07:34.463 "method": "bdev_malloc_create" 00:07:34.463 }, 00:07:34.463 { 00:07:34.463 "params": { 00:07:34.463 "filename": "/dev/zram1", 00:07:34.463 "name": "uring0" 00:07:34.463 }, 00:07:34.463 "method": "bdev_uring_create" 00:07:34.463 }, 00:07:34.463 { 00:07:34.463 "params": { 00:07:34.463 "name": "uring0" 00:07:34.463 }, 00:07:34.463 "method": "bdev_uring_delete" 00:07:34.463 }, 00:07:34.463 { 00:07:34.463 "method": "bdev_wait_for_examine" 00:07:34.463 } 00:07:34.463 ] 00:07:34.463 } 00:07:34.463 ] 00:07:34.463 } 00:07:34.463 [2024-11-20 15:55:32.639958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.463 [2024-11-20 15:55:32.710621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.732 [2024-11-20 15:55:32.771119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.996  [2024-11-20T15:55:33.504Z] Copying: 0/0 [B] (average 0 Bps) 00:07:35.254 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.254 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.255 15:55:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.255 15:55:33 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:35.255 { 00:07:35.255 "subsystems": [ 00:07:35.255 { 00:07:35.255 "subsystem": "bdev", 00:07:35.255 "config": [ 00:07:35.255 { 00:07:35.255 "params": { 00:07:35.255 "block_size": 512, 00:07:35.255 "num_blocks": 1048576, 00:07:35.255 "name": "malloc0" 00:07:35.255 }, 00:07:35.255 "method": "bdev_malloc_create" 00:07:35.255 }, 00:07:35.255 { 00:07:35.255 "params": { 00:07:35.255 "filename": "/dev/zram1", 00:07:35.255 "name": "uring0" 00:07:35.255 }, 00:07:35.255 "method": "bdev_uring_create" 00:07:35.255 }, 00:07:35.255 { 00:07:35.255 "params": { 00:07:35.255 "name": "uring0" 00:07:35.255 }, 00:07:35.255 "method": "bdev_uring_delete" 00:07:35.255 }, 00:07:35.255 { 00:07:35.255 "method": "bdev_wait_for_examine" 00:07:35.255 } 00:07:35.255 ] 00:07:35.255 } 00:07:35.255 ] 00:07:35.255 } 00:07:35.255 [2024-11-20 15:55:33.462483] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:35.255 [2024-11-20 15:55:33.462625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61609 ] 00:07:35.513 [2024-11-20 15:55:33.623199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.513 [2024-11-20 15:55:33.700150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.513 [2024-11-20 15:55:33.758911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.772 [2024-11-20 15:55:33.983976] bdev.c:8626:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:35.772 [2024-11-20 15:55:33.984044] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:35.772 [2024-11-20 15:55:33.984059] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:35.772 [2024-11-20 15:55:33.984071] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.340 [2024-11-20 15:55:34.296867] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:36.340 00:07:36.340 real 0m16.186s 00:07:36.340 user 0m10.780s 00:07:36.340 sys 0m13.838s 00:07:36.340 ************************************ 00:07:36.340 END TEST dd_uring_copy 00:07:36.340 ************************************ 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.340 15:55:34 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 00:07:36.599 real 0m16.422s 00:07:36.599 user 0m10.906s 00:07:36.599 sys 0m13.951s 00:07:36.599 15:55:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.599 ************************************ 00:07:36.599 END TEST spdk_dd_uring 00:07:36.599 ************************************ 00:07:36.599 15:55:34 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 15:55:34 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:36.599 15:55:34 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.599 15:55:34 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.599 15:55:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.599 ************************************ 00:07:36.599 START TEST spdk_dd_sparse 00:07:36.599 ************************************ 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:36.599 * Looking for test storage... 00:07:36.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.599 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.858 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.858 --rc genhtml_branch_coverage=1 00:07:36.858 --rc genhtml_function_coverage=1 00:07:36.858 --rc genhtml_legend=1 00:07:36.858 --rc geninfo_all_blocks=1 00:07:36.859 --rc geninfo_unexecuted_blocks=1 00:07:36.859 00:07:36.859 ' 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.859 --rc genhtml_branch_coverage=1 00:07:36.859 --rc genhtml_function_coverage=1 00:07:36.859 --rc genhtml_legend=1 00:07:36.859 --rc geninfo_all_blocks=1 00:07:36.859 --rc geninfo_unexecuted_blocks=1 00:07:36.859 00:07:36.859 ' 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.859 --rc genhtml_branch_coverage=1 00:07:36.859 --rc genhtml_function_coverage=1 00:07:36.859 --rc genhtml_legend=1 00:07:36.859 --rc geninfo_all_blocks=1 00:07:36.859 --rc geninfo_unexecuted_blocks=1 00:07:36.859 00:07:36.859 ' 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.859 --rc genhtml_branch_coverage=1 00:07:36.859 --rc genhtml_function_coverage=1 00:07:36.859 --rc genhtml_legend=1 00:07:36.859 --rc geninfo_all_blocks=1 00:07:36.859 --rc geninfo_unexecuted_blocks=1 00:07:36.859 00:07:36.859 ' 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.859 15:55:34 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:36.859 1+0 records in 00:07:36.859 1+0 records out 00:07:36.859 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00628579 s, 667 MB/s 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:36.859 1+0 records in 00:07:36.859 1+0 records out 00:07:36.859 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00911664 s, 460 MB/s 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:36.859 1+0 records in 00:07:36.859 1+0 records out 00:07:36.859 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00636388 s, 659 MB/s 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 ************************************ 00:07:36.859 START TEST dd_sparse_file_to_file 00:07:36.859 ************************************ 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:36.859 15:55:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:36.859 [2024-11-20 15:55:34.943004] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
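For reference, the prepare step traced above builds the sparse input with ordinary shell tools; a minimal stand-alone sketch of the same sequence (file names, sizes and offsets taken verbatim from the trace; throughput figures will differ per host):

truncate dd_sparse_aio_disk --size 104857600                         # 100 MiB file backing the AIO bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1                          # data extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4                   # data extent at 16 MiB, hole before it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8                   # data extent at 32 MiB, hole before it
stat --printf='%s bytes apparent, %b blocks allocated\n' file_zero1  # holes show up as unallocated blocks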
00:07:36.859 [2024-11-20 15:55:34.943121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61712 ] 00:07:36.859 { 00:07:36.859 "subsystems": [ 00:07:36.859 { 00:07:36.859 "subsystem": "bdev", 00:07:36.859 "config": [ 00:07:36.859 { 00:07:36.859 "params": { 00:07:36.859 "block_size": 4096, 00:07:36.859 "filename": "dd_sparse_aio_disk", 00:07:36.859 "name": "dd_aio" 00:07:36.859 }, 00:07:36.859 "method": "bdev_aio_create" 00:07:36.859 }, 00:07:36.859 { 00:07:36.859 "params": { 00:07:36.859 "lvs_name": "dd_lvstore", 00:07:36.859 "bdev_name": "dd_aio" 00:07:36.859 }, 00:07:36.859 "method": "bdev_lvol_create_lvstore" 00:07:36.859 }, 00:07:36.859 { 00:07:36.859 "method": "bdev_wait_for_examine" 00:07:36.859 } 00:07:36.859 ] 00:07:36.859 } 00:07:36.859 ] 00:07:36.859 } 00:07:36.859 [2024-11-20 15:55:35.089434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.117 [2024-11-20 15:55:35.157555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.117 [2024-11-20 15:55:35.216912] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.117  [2024-11-20T15:55:35.626Z] Copying: 12/36 [MB] (average 1200 MBps) 00:07:37.376 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:37.376 ************************************ 00:07:37.376 END TEST dd_sparse_file_to_file 00:07:37.376 ************************************ 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:37.376 00:07:37.376 real 0m0.671s 00:07:37.376 user 0m0.426s 00:07:37.376 sys 0m0.360s 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:37.376 ************************************ 00:07:37.376 START TEST dd_sparse_file_to_bdev 
00:07:37.376 ************************************ 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:37.376 15:55:35 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:37.633 [2024-11-20 15:55:35.670008] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:37.633 [2024-11-20 15:55:35.670105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61749 ] 00:07:37.633 { 00:07:37.633 "subsystems": [ 00:07:37.633 { 00:07:37.633 "subsystem": "bdev", 00:07:37.633 "config": [ 00:07:37.633 { 00:07:37.633 "params": { 00:07:37.634 "block_size": 4096, 00:07:37.634 "filename": "dd_sparse_aio_disk", 00:07:37.634 "name": "dd_aio" 00:07:37.634 }, 00:07:37.634 "method": "bdev_aio_create" 00:07:37.634 }, 00:07:37.634 { 00:07:37.634 "params": { 00:07:37.634 "lvs_name": "dd_lvstore", 00:07:37.634 "lvol_name": "dd_lvol", 00:07:37.634 "size_in_mib": 36, 00:07:37.634 "thin_provision": true 00:07:37.634 }, 00:07:37.634 "method": "bdev_lvol_create" 00:07:37.634 }, 00:07:37.634 { 00:07:37.634 "method": "bdev_wait_for_examine" 00:07:37.634 } 00:07:37.634 ] 00:07:37.634 } 00:07:37.634 ] 00:07:37.634 } 00:07:37.634 [2024-11-20 15:55:35.819842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.634 [2024-11-20 15:55:35.869871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.890 [2024-11-20 15:55:35.927865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.890  [2024-11-20T15:55:36.402Z] Copying: 12/36 [MB] (average 500 MBps) 00:07:38.152 00:07:38.152 00:07:38.152 real 0m0.634s 00:07:38.152 user 0m0.400s 00:07:38.152 sys 0m0.346s 00:07:38.152 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.152 ************************************ 00:07:38.152 END TEST dd_sparse_file_to_bdev 00:07:38.152 ************************************ 00:07:38.152 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:38.152 15:55:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:07:38.152 15:55:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.152 15:55:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 ************************************ 00:07:38.153 START TEST dd_sparse_bdev_to_file 00:07:38.153 ************************************ 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:38.153 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:38.153 { 00:07:38.153 "subsystems": [ 00:07:38.153 { 00:07:38.153 "subsystem": "bdev", 00:07:38.153 "config": [ 00:07:38.153 { 00:07:38.153 "params": { 00:07:38.153 "block_size": 4096, 00:07:38.153 "filename": "dd_sparse_aio_disk", 00:07:38.153 "name": "dd_aio" 00:07:38.153 }, 00:07:38.153 "method": "bdev_aio_create" 00:07:38.153 }, 00:07:38.153 { 00:07:38.153 "method": "bdev_wait_for_examine" 00:07:38.153 } 00:07:38.153 ] 00:07:38.153 } 00:07:38.153 ] 00:07:38.153 } 00:07:38.153 [2024-11-20 15:55:36.355271] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
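The JSON blocks interleaved with the trace are the bdev configuration that gen_conf emits and spdk_dd reads on /dev/fd/62; a hedged equivalent that uses a plain file instead of the anonymous descriptor (bdev.json is an illustrative name, the JSON itself is the one printed above):

cat > bdev.json <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_aio_create",
   "params":{"filename":"dd_sparse_aio_disk","name":"dd_aio","block_size":4096}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 \
    --bs=12582912 --sparse --json bdev.json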
00:07:38.153 [2024-11-20 15:55:36.355396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:07:38.411 [2024-11-20 15:55:36.509332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.411 [2024-11-20 15:55:36.578941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.411 [2024-11-20 15:55:36.641668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.670  [2024-11-20T15:55:37.178Z] Copying: 12/36 [MB] (average 923 MBps) 00:07:38.928 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:38.928 00:07:38.928 real 0m0.668s 00:07:38.928 user 0m0.397s 00:07:38.928 sys 0m0.374s 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.928 ************************************ 00:07:38.928 END TEST dd_sparse_bdev_to_file 00:07:38.928 ************************************ 00:07:38.928 15:55:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:38.928 00:07:38.928 real 0m2.363s 00:07:38.928 user 0m1.394s 00:07:38.928 sys 0m1.296s 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.928 ************************************ 00:07:38.928 END TEST spdk_dd_sparse 00:07:38.928 ************************************ 00:07:38.928 15:55:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:38.928 15:55:37 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:38.928 15:55:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.928 15:55:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.928 15:55:37 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.928 ************************************ 00:07:38.928 START TEST spdk_dd_negative 00:07:38.928 ************************************ 00:07:38.928 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:38.928 * Looking for test storage... 00:07:38.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:38.928 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:38.928 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:07:38.928 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.187 --rc genhtml_branch_coverage=1 00:07:39.187 --rc genhtml_function_coverage=1 00:07:39.187 --rc genhtml_legend=1 00:07:39.187 --rc geninfo_all_blocks=1 00:07:39.187 --rc geninfo_unexecuted_blocks=1 00:07:39.187 00:07:39.187 ' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.187 --rc genhtml_branch_coverage=1 00:07:39.187 --rc genhtml_function_coverage=1 00:07:39.187 --rc genhtml_legend=1 00:07:39.187 --rc geninfo_all_blocks=1 00:07:39.187 --rc geninfo_unexecuted_blocks=1 00:07:39.187 00:07:39.187 ' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.187 --rc genhtml_branch_coverage=1 00:07:39.187 --rc genhtml_function_coverage=1 00:07:39.187 --rc genhtml_legend=1 00:07:39.187 --rc geninfo_all_blocks=1 00:07:39.187 --rc geninfo_unexecuted_blocks=1 00:07:39.187 00:07:39.187 ' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.187 --rc genhtml_branch_coverage=1 00:07:39.187 --rc genhtml_function_coverage=1 00:07:39.187 --rc genhtml_legend=1 00:07:39.187 --rc geninfo_all_blocks=1 00:07:39.187 --rc geninfo_unexecuted_blocks=1 00:07:39.187 00:07:39.187 ' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.187 ************************************ 00:07:39.187 START TEST 
dd_invalid_arguments 00:07:39.187 ************************************ 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.187 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:39.187 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:39.187 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:39.187 00:07:39.187 CPU options: 00:07:39.187 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:39.187 (like [0,1,10]) 00:07:39.187 --lcores lcore to CPU mapping list. The list is in the format: 00:07:39.187 [<,lcores[@CPUs]>...] 00:07:39.187 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:39.187 Within the group, '-' is used for range separator, 00:07:39.187 ',' is used for single number separator. 00:07:39.187 '( )' can be omitted for single element group, 00:07:39.187 '@' can be omitted if cpus and lcores have the same value 00:07:39.187 --disable-cpumask-locks Disable CPU core lock files. 00:07:39.187 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:39.187 pollers in the app support interrupt mode) 00:07:39.187 -p, --main-core main (primary) core for DPDK 00:07:39.187 00:07:39.187 Configuration options: 00:07:39.188 -c, --config, --json JSON config file 00:07:39.188 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:39.188 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:39.188 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:39.188 --rpcs-allowed comma-separated list of permitted RPCS 00:07:39.188 --json-ignore-init-errors don't exit on invalid config entry 00:07:39.188 00:07:39.188 Memory options: 00:07:39.188 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:39.188 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:39.188 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:39.188 -R, --huge-unlink unlink huge files after initialization 00:07:39.188 -n, --mem-channels number of memory channels used for DPDK 00:07:39.188 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:39.188 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:39.188 --no-huge run without using hugepages 00:07:39.188 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:07:39.188 -i, --shm-id shared memory ID (optional) 00:07:39.188 -g, --single-file-segments force creating just one hugetlbfs file 00:07:39.188 00:07:39.188 PCI options: 00:07:39.188 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:39.188 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:39.188 -u, --no-pci disable PCI access 00:07:39.188 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:39.188 00:07:39.188 Log options: 00:07:39.188 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:39.188 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:39.188 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:39.188 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:39.188 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:07:39.188 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:07:39.188 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:07:39.188 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:07:39.188 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:07:39.188 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:07:39.188 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:07:39.188 --silence-noticelog disable notice level logging to stderr 00:07:39.188 00:07:39.188 Trace options: 00:07:39.188 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:39.188 setting 0 to disable trace (default 32768) 00:07:39.188 Tracepoints vary in size and can use more than one trace entry. 00:07:39.188 -e, --tpoint-group [:] 00:07:39.188 [2024-11-20 15:55:37.339366] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:39.188 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:39.188 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:07:39.188 bdev_raid, scheduler, all). 00:07:39.188 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:39.188 a tracepoint group. First tpoint inside a group can be enabled by 00:07:39.188 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:39.188 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:39.188 in /include/spdk_internal/trace_defs.h 00:07:39.188 00:07:39.188 Other options: 00:07:39.188 -h, --help show this usage 00:07:39.188 -v, --version print SPDK version 00:07:39.188 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:39.188 --env-context Opaque context for use of the env implementation 00:07:39.188 00:07:39.188 Application specific: 00:07:39.188 [--------- DD Options ---------] 00:07:39.188 --if Input file. Must specify either --if or --ib. 00:07:39.188 --ib Input bdev. Must specifier either --if or --ib 00:07:39.188 --of Output file. Must specify either --of or --ob. 00:07:39.188 --ob Output bdev. Must specify either --of or --ob. 00:07:39.188 --iflag Input file flags. 00:07:39.188 --oflag Output file flags. 00:07:39.188 --bs I/O unit size (default: 4096) 00:07:39.188 --qd Queue depth (default: 2) 00:07:39.188 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:39.188 --skip Skip this many I/O units at start of input. (default: 0) 00:07:39.188 --seek Skip this many I/O units at start of output. (default: 0) 00:07:39.188 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:39.188 --sparse Enable hole skipping in input target 00:07:39.188 Available iflag and oflag values: 00:07:39.188 append - append mode 00:07:39.188 direct - use direct I/O for data 00:07:39.188 directory - fail unless a directory 00:07:39.188 dsync - use synchronized I/O for data 00:07:39.188 noatime - do not update access time 00:07:39.188 noctty - do not assign controlling terminal from file 00:07:39.188 nofollow - do not follow symlinks 00:07:39.188 nonblock - use non-blocking I/O 00:07:39.188 sync - use synchronized I/O for data and metadata 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.188 00:07:39.188 real 0m0.070s 00:07:39.188 user 0m0.041s 00:07:39.188 sys 0m0.028s 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 ************************************ 00:07:39.188 END TEST dd_invalid_arguments 00:07:39.188 ************************************ 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.188 ************************************ 00:07:39.188 START TEST dd_double_input 00:07:39.188 ************************************ 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.188 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:39.446 [2024-11-20 15:55:37.450966] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
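The negative cases in this suite all follow the same shape: invoke spdk_dd with an invalid option set, capture the non-zero exit status, and match the error text. A sketch of the double-input case just traced (binary and dump-file paths as used in this run; the expected status 22 is the es value recorded below):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=   # both an input file and an input bdev
echo $?   # expected: 22, with "You may specify either --if or --ib, but not both." in the error log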
00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.446 00:07:39.446 real 0m0.060s 00:07:39.446 user 0m0.040s 00:07:39.446 sys 0m0.020s 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 ************************************ 00:07:39.446 END TEST dd_double_input 00:07:39.446 ************************************ 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 ************************************ 00:07:39.446 START TEST dd_double_output 00:07:39.446 ************************************ 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:39.446 [2024-11-20 15:55:37.569687] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.446 00:07:39.446 real 0m0.083s 00:07:39.446 user 0m0.054s 00:07:39.446 sys 0m0.029s 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 ************************************ 00:07:39.446 END TEST dd_double_output 00:07:39.446 ************************************ 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.446 ************************************ 00:07:39.446 START TEST dd_no_input 00:07:39.446 ************************************ 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.446 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:39.704 [2024-11-20 15:55:37.700747] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.704 00:07:39.704 real 0m0.081s 00:07:39.704 user 0m0.054s 00:07:39.704 sys 0m0.026s 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 ************************************ 00:07:39.704 END TEST dd_no_input 00:07:39.704 ************************************ 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.704 ************************************ 00:07:39.704 START TEST dd_no_output 00:07:39.704 ************************************ 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.704 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:39.705 [2024-11-20 15:55:37.836934] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:39.705 15:55:37 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.705 00:07:39.705 real 0m0.086s 00:07:39.705 user 0m0.046s 00:07:39.705 sys 0m0.037s 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:39.705 ************************************ 00:07:39.705 END TEST dd_no_output 00:07:39.705 ************************************ 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.705 ************************************ 00:07:39.705 START TEST dd_wrong_blocksize 00:07:39.705 ************************************ 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.705 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:39.962 [2024-11-20 15:55:37.968712] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:39.962 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:07:39.962 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.962 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.962 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.962 00:07:39.962 real 0m0.081s 00:07:39.962 user 0m0.052s 00:07:39.963 sys 0m0.026s 00:07:39.963 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.963 15:55:37 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:39.963 ************************************ 00:07:39.963 END TEST dd_wrong_blocksize 00:07:39.963 ************************************ 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:39.963 ************************************ 00:07:39.963 START TEST dd_smaller_blocksize 00:07:39.963 ************************************ 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.963 
15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:39.963 15:55:38 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:39.963 [2024-11-20 15:55:38.108512] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:39.963 [2024-11-20 15:55:38.108632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62019 ] 00:07:40.220 [2024-11-20 15:55:38.260799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.220 [2024-11-20 15:55:38.351473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.220 [2024-11-20 15:55:38.413437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.788 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:40.788 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:41.047 [2024-11-20 15:55:39.057266] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:41.047 [2024-11-20 15:55:39.057336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.047 [2024-11-20 15:55:39.183671] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.047 00:07:41.047 real 0m1.207s 00:07:41.047 user 0m0.420s 00:07:41.047 sys 0m0.676s 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:41.047 ************************************ 00:07:41.047 END TEST dd_smaller_blocksize 00:07:41.047 ************************************ 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.047 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.316 ************************************ 00:07:41.316 START TEST dd_invalid_count 00:07:41.316 ************************************ 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
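The smaller-blocksize failure traced above comes from requesting a 99999999999999-byte I/O unit, which spdk_dd cannot allocate; a minimal reproduction under the same paths (the dump files are the suite's scratch files created earlier with touch):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump
"$SPDK_DD" --if=${DUMP}0 --of=${DUMP}1 --bs=99999999999999
# expected error: "Cannot allocate memory - try smaller block size value"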
00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:41.316 [2024-11-20 15:55:39.365600] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.316 00:07:41.316 real 0m0.081s 00:07:41.316 user 0m0.039s 00:07:41.316 sys 0m0.038s 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:07:41.316 ************************************ 00:07:41.316 END TEST dd_invalid_count 00:07:41.316 ************************************ 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.316 ************************************ 
00:07:41.316 START TEST dd_invalid_oflag 00:07:41.316 ************************************ 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.316 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:41.317 [2024-11-20 15:55:39.495092] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.317 00:07:41.317 real 0m0.079s 00:07:41.317 user 0m0.038s 00:07:41.317 sys 0m0.037s 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:41.317 ************************************ 00:07:41.317 END TEST dd_invalid_oflag 00:07:41.317 ************************************ 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.317 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.574 ************************************ 00:07:41.574 START TEST dd_invalid_iflag 00:07:41.574 
************************************ 00:07:41.574 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:41.575 [2024-11-20 15:55:39.622006] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.575 00:07:41.575 real 0m0.071s 00:07:41.575 user 0m0.037s 00:07:41.575 sys 0m0.032s 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:41.575 ************************************ 00:07:41.575 END TEST dd_invalid_iflag 00:07:41.575 ************************************ 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:41.575 ************************************ 00:07:41.575 START TEST dd_unknown_flag 00:07:41.575 ************************************ 00:07:41.575 
15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:41.575 15:55:39 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:41.575 [2024-11-20 15:55:39.754370] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:41.575 [2024-11-20 15:55:39.754473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:07:41.833 [2024-11-20 15:55:39.905534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.833 [2024-11-20 15:55:39.975474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.833 [2024-11-20 15:55:40.034951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.833 [2024-11-20 15:55:40.076497] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:41.833 [2024-11-20 15:55:40.076586] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.833 [2024-11-20 15:55:40.076686] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:41.833 [2024-11-20 15:55:40.076711] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.833 [2024-11-20 15:55:40.077138] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:41.833 [2024-11-20 15:55:40.077184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.833 [2024-11-20 15:55:40.077265] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:41.833 [2024-11-20 15:55:40.077282] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:42.091 [2024-11-20 15:55:40.204511] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.091 00:07:42.091 real 0m0.584s 00:07:42.091 user 0m0.317s 00:07:42.091 sys 0m0.174s 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 ************************************ 00:07:42.091 END TEST dd_unknown_flag 00:07:42.091 ************************************ 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.091 ************************************ 00:07:42.091 START TEST dd_invalid_json 00:07:42.091 ************************************ 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.091 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:42.350 [2024-11-20 15:55:40.381886] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
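In this dd_invalid_json case the --json argument points at file descriptor 62, but the command feeding that descriptor is ":", so spdk_dd receives an empty document and (as the parse_json error below shows) must refuse to start. A rough stand-alone equivalent, assuming a local build path and process substitution instead of the harness's fd plumbing, would be:

    # Empty JSON on the config fd is expected to fail with "JSON data cannot be empty".
    ./build/bin/spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump1 --json <(:) \
        && { echo "unexpected success" >&2; exit 1; }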
00:07:42.350 [2024-11-20 15:55:40.381974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62145 ] 00:07:42.350 [2024-11-20 15:55:40.525577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.350 [2024-11-20 15:55:40.581433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.350 [2024-11-20 15:55:40.581518] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:42.350 [2024-11-20 15:55:40.581546] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:42.350 [2024-11-20 15:55:40.581556] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.350 [2024-11-20 15:55:40.581593] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.608 00:07:42.608 real 0m0.320s 00:07:42.608 user 0m0.159s 00:07:42.608 sys 0m0.056s 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.608 ************************************ 00:07:42.608 END TEST dd_invalid_json 00:07:42.608 ************************************ 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:42.608 ************************************ 00:07:42.608 START TEST dd_invalid_seek 00:07:42.608 ************************************ 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:42.608 
15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.608 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:42.609 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:42.609 15:55:40 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:07:42.609 { 00:07:42.609 "subsystems": [ 00:07:42.609 { 00:07:42.609 "subsystem": "bdev", 00:07:42.609 "config": [ 00:07:42.609 { 00:07:42.609 "params": { 00:07:42.609 "block_size": 512, 00:07:42.609 "num_blocks": 512, 00:07:42.609 "name": "malloc0" 00:07:42.609 }, 00:07:42.609 "method": "bdev_malloc_create" 00:07:42.609 }, 00:07:42.609 { 00:07:42.609 "params": { 00:07:42.609 "block_size": 512, 00:07:42.609 "num_blocks": 512, 00:07:42.609 "name": "malloc1" 00:07:42.609 }, 00:07:42.609 "method": "bdev_malloc_create" 00:07:42.609 }, 00:07:42.609 { 00:07:42.609 "method": "bdev_wait_for_examine" 00:07:42.609 } 00:07:42.609 ] 00:07:42.609 } 00:07:42.609 ] 00:07:42.609 } 00:07:42.609 [2024-11-20 15:55:40.764458] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
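The JSON document printed above is the bdev configuration the test hands to spdk_dd on --json /dev/fd/62: two malloc bdevs of 512 blocks at 512 bytes each, plus bdev_wait_for_examine. A stand-alone sketch of the same invocation, assuming a local build and process substitution instead of the harness's gen_conf helper, might be:

    # conf mirrors the subsystems block shown above.
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
      {"method":"bdev_wait_for_examine"}]}]}'

    # --seek=513 asks for one block past the 512 available on malloc1, so the copy must fail.
    ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json <(printf '%s' "$conf")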
00:07:42.609 [2024-11-20 15:55:40.764554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:07:42.866 [2024-11-20 15:55:40.912374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.866 [2024-11-20 15:55:40.976073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.866 [2024-11-20 15:55:41.032470] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.866 [2024-11-20 15:55:41.096300] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:07:42.866 [2024-11-20 15:55:41.096408] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.125 [2024-11-20 15:55:41.217621] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.125 00:07:43.125 real 0m0.587s 00:07:43.125 user 0m0.371s 00:07:43.125 sys 0m0.173s 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 ************************************ 00:07:43.125 END TEST dd_invalid_seek 00:07:43.125 ************************************ 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 ************************************ 00:07:43.125 START TEST dd_invalid_skip 00:07:43.125 ************************************ 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.125 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.126 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.126 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.126 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.126 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:07:43.384 [2024-11-20 15:55:41.394458] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:43.384 [2024-11-20 15:55:41.394545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62209 ] 00:07:43.384 { 00:07:43.384 "subsystems": [ 00:07:43.384 { 00:07:43.384 "subsystem": "bdev", 00:07:43.384 "config": [ 00:07:43.384 { 00:07:43.384 "params": { 00:07:43.384 "block_size": 512, 00:07:43.384 "num_blocks": 512, 00:07:43.384 "name": "malloc0" 00:07:43.384 }, 00:07:43.384 "method": "bdev_malloc_create" 00:07:43.384 }, 00:07:43.384 { 00:07:43.384 "params": { 00:07:43.384 "block_size": 512, 00:07:43.384 "num_blocks": 512, 00:07:43.384 "name": "malloc1" 00:07:43.384 }, 00:07:43.384 "method": "bdev_malloc_create" 00:07:43.384 }, 00:07:43.384 { 00:07:43.384 "method": "bdev_wait_for_examine" 00:07:43.384 } 00:07:43.384 ] 00:07:43.384 } 00:07:43.384 ] 00:07:43.384 } 00:07:43.384 [2024-11-20 15:55:41.540935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.384 [2024-11-20 15:55:41.612477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.642 [2024-11-20 15:55:41.673998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.642 [2024-11-20 15:55:41.743186] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:07:43.642 [2024-11-20 15:55:41.743270] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.642 [2024-11-20 15:55:41.869534] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.901 00:07:43.901 real 0m0.598s 00:07:43.901 user 0m0.390s 00:07:43.901 sys 0m0.165s 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.901 ************************************ 00:07:43.901 END TEST dd_invalid_skip 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 ************************************ 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:43.901 ************************************ 00:07:43.901 START TEST dd_invalid_input_count 00:07:43.901 ************************************ 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:07:43.901 15:55:41 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:07:43.901 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.902 15:55:41 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:07:43.902 { 00:07:43.902 "subsystems": [ 00:07:43.902 { 00:07:43.902 "subsystem": "bdev", 00:07:43.902 "config": [ 00:07:43.902 { 00:07:43.902 "params": { 00:07:43.902 "block_size": 512, 00:07:43.902 "num_blocks": 512, 00:07:43.902 "name": "malloc0" 00:07:43.902 }, 
00:07:43.902 "method": "bdev_malloc_create" 00:07:43.902 }, 00:07:43.902 { 00:07:43.902 "params": { 00:07:43.902 "block_size": 512, 00:07:43.902 "num_blocks": 512, 00:07:43.902 "name": "malloc1" 00:07:43.902 }, 00:07:43.902 "method": "bdev_malloc_create" 00:07:43.902 }, 00:07:43.902 { 00:07:43.902 "method": "bdev_wait_for_examine" 00:07:43.902 } 00:07:43.902 ] 00:07:43.902 } 00:07:43.902 ] 00:07:43.902 } 00:07:43.902 [2024-11-20 15:55:42.047128] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:43.902 [2024-11-20 15:55:42.047260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62246 ] 00:07:44.161 [2024-11-20 15:55:42.201032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.161 [2024-11-20 15:55:42.268555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.161 [2024-11-20 15:55:42.329742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.161 [2024-11-20 15:55:42.397663] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:07:44.161 [2024-11-20 15:55:42.397751] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.419 [2024-11-20 15:55:42.519711] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.419 00:07:44.419 real 0m0.596s 00:07:44.419 user 0m0.383s 00:07:44.419 sys 0m0.165s 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:07:44.419 ************************************ 00:07:44.419 END TEST dd_invalid_input_count 00:07:44.419 ************************************ 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:44.419 ************************************ 00:07:44.419 START TEST dd_invalid_output_count 00:07:44.419 ************************************ 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:44.419 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.420 15:55:42 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:07:44.678 { 00:07:44.678 "subsystems": [ 00:07:44.678 { 00:07:44.678 "subsystem": "bdev", 00:07:44.678 "config": [ 00:07:44.678 { 00:07:44.678 "params": { 00:07:44.678 "block_size": 512, 00:07:44.678 "num_blocks": 512, 00:07:44.678 "name": "malloc0" 00:07:44.678 }, 00:07:44.678 "method": "bdev_malloc_create" 00:07:44.678 }, 00:07:44.678 { 00:07:44.678 "method": "bdev_wait_for_examine" 00:07:44.678 } 00:07:44.678 ] 00:07:44.678 } 00:07:44.678 ] 00:07:44.678 } 00:07:44.678 [2024-11-20 15:55:42.698947] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:44.678 [2024-11-20 15:55:42.699054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62275 ] 00:07:44.678 [2024-11-20 15:55:42.849385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.678 [2024-11-20 15:55:42.913626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.938 [2024-11-20 15:55:42.971796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.938 [2024-11-20 15:55:43.031081] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:07:44.938 [2024-11-20 15:55:43.031148] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.938 [2024-11-20 15:55:43.153174] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.198 00:07:45.198 real 0m0.581s 00:07:45.198 user 0m0.375s 00:07:45.198 sys 0m0.158s 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:07:45.198 ************************************ 00:07:45.198 END TEST dd_invalid_output_count 00:07:45.198 ************************************ 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.198 ************************************ 00:07:45.198 START TEST dd_bs_not_multiple 00:07:45.198 ************************************ 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:07:45.198 15:55:43 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:07:45.198 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:45.199 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:07:45.199 [2024-11-20 15:55:43.334656] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
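dd_bs_not_multiple drives the same two malloc bdevs with --bs=513, which is not a multiple of the input bdev's 512-byte block size, so the run below must abort before copying anything. Reproducing just that check outside the harness (reusing the $conf sketch from the seek example above, hypothetical build path) would be roughly:

    # 513 is not a multiple of malloc0's 512-byte block size; success here would mean the check is broken.
    if ./build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$conf"); then
        echo "unexpected success" >&2
        exit 1
    fi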
00:07:45.199 { 00:07:45.199 "subsystems": [ 00:07:45.199 { 00:07:45.199 "subsystem": "bdev", 00:07:45.199 "config": [ 00:07:45.199 { 00:07:45.199 "params": { 00:07:45.199 "block_size": 512, 00:07:45.199 "num_blocks": 512, 00:07:45.199 "name": "malloc0" 00:07:45.199 }, 00:07:45.199 "method": "bdev_malloc_create" 00:07:45.199 }, 00:07:45.199 { 00:07:45.199 "params": { 00:07:45.199 "block_size": 512, 00:07:45.199 "num_blocks": 512, 00:07:45.199 "name": "malloc1" 00:07:45.199 }, 00:07:45.199 "method": "bdev_malloc_create" 00:07:45.199 }, 00:07:45.199 { 00:07:45.199 "method": "bdev_wait_for_examine" 00:07:45.199 } 00:07:45.199 ] 00:07:45.199 } 00:07:45.199 ] 00:07:45.199 } 00:07:45.199 [2024-11-20 15:55:43.334755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62312 ] 00:07:45.502 [2024-11-20 15:55:43.483522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.502 [2024-11-20 15:55:43.541016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.502 [2024-11-20 15:55:43.599416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.502 [2024-11-20 15:55:43.665843] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:07:45.502 [2024-11-20 15:55:43.665923] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.760 [2024-11-20 15:55:43.792695] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:45.760 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:07:45.760 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.760 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:07:45.760 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:07:45.760 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:07:45.761 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.761 00:07:45.761 real 0m0.588s 00:07:45.761 user 0m0.367s 00:07:45.761 sys 0m0.179s 00:07:45.761 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.761 15:55:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 ************************************ 00:07:45.761 END TEST dd_bs_not_multiple 00:07:45.761 ************************************ 00:07:45.761 00:07:45.761 real 0m6.824s 00:07:45.761 user 0m3.558s 00:07:45.761 sys 0m2.638s 00:07:45.761 15:55:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.761 15:55:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 ************************************ 00:07:45.761 END TEST spdk_dd_negative 00:07:45.761 ************************************ 00:07:45.761 ************************************ 00:07:45.761 END TEST spdk_dd 00:07:45.761 ************************************ 00:07:45.761 00:07:45.761 real 1m21.353s 00:07:45.761 user 0m52.117s 00:07:45.761 sys 0m36.429s 00:07:45.761 15:55:43 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:07:45.761 15:55:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:45.761 15:55:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:45.761 15:55:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.761 15:55:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.761 15:55:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.761 15:55:43 -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 15:55:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:46.019 15:55:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:46.019 15:55:44 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:46.019 15:55:44 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:46.019 15:55:44 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:46.019 15:55:44 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:46.019 15:55:44 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.019 15:55:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.019 15:55:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.019 15:55:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 ************************************ 00:07:46.019 START TEST nvmf_tcp 00:07:46.019 ************************************ 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.019 * Looking for test storage... 00:07:46.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.019 15:55:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.019 --rc genhtml_branch_coverage=1 00:07:46.019 --rc genhtml_function_coverage=1 00:07:46.019 --rc genhtml_legend=1 00:07:46.019 --rc geninfo_all_blocks=1 00:07:46.019 --rc geninfo_unexecuted_blocks=1 00:07:46.019 00:07:46.019 ' 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.019 --rc genhtml_branch_coverage=1 00:07:46.019 --rc genhtml_function_coverage=1 00:07:46.019 --rc genhtml_legend=1 00:07:46.019 --rc geninfo_all_blocks=1 00:07:46.019 --rc geninfo_unexecuted_blocks=1 00:07:46.019 00:07:46.019 ' 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.019 --rc genhtml_branch_coverage=1 00:07:46.019 --rc genhtml_function_coverage=1 00:07:46.019 --rc genhtml_legend=1 00:07:46.019 --rc geninfo_all_blocks=1 00:07:46.019 --rc geninfo_unexecuted_blocks=1 00:07:46.019 00:07:46.019 ' 00:07:46.019 15:55:44 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.019 --rc genhtml_branch_coverage=1 00:07:46.019 --rc genhtml_function_coverage=1 00:07:46.019 --rc genhtml_legend=1 00:07:46.019 --rc geninfo_all_blocks=1 00:07:46.019 --rc geninfo_unexecuted_blocks=1 00:07:46.019 00:07:46.019 ' 00:07:46.019 15:55:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.019 15:55:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:46.020 15:55:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.020 15:55:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.020 15:55:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.020 15:55:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.020 ************************************ 00:07:46.020 START TEST nvmf_target_core 00:07:46.020 ************************************ 00:07:46.020 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.278 * Looking for test storage... 00:07:46.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:46.278 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.278 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.278 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.278 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.278 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.279 --rc genhtml_branch_coverage=1 00:07:46.279 --rc genhtml_function_coverage=1 00:07:46.279 --rc genhtml_legend=1 00:07:46.279 --rc geninfo_all_blocks=1 00:07:46.279 --rc geninfo_unexecuted_blocks=1 00:07:46.279 00:07:46.279 ' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.279 --rc genhtml_branch_coverage=1 00:07:46.279 --rc genhtml_function_coverage=1 00:07:46.279 --rc genhtml_legend=1 00:07:46.279 --rc geninfo_all_blocks=1 00:07:46.279 --rc geninfo_unexecuted_blocks=1 00:07:46.279 00:07:46.279 ' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.279 --rc genhtml_branch_coverage=1 00:07:46.279 --rc genhtml_function_coverage=1 00:07:46.279 --rc genhtml_legend=1 00:07:46.279 --rc geninfo_all_blocks=1 00:07:46.279 --rc geninfo_unexecuted_blocks=1 00:07:46.279 00:07:46.279 ' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.279 --rc genhtml_branch_coverage=1 00:07:46.279 --rc genhtml_function_coverage=1 00:07:46.279 --rc genhtml_legend=1 00:07:46.279 --rc geninfo_all_blocks=1 00:07:46.279 --rc geninfo_unexecuted_blocks=1 00:07:46.279 00:07:46.279 ' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.279 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.279 ************************************ 00:07:46.279 START TEST nvmf_host_management 00:07:46.279 ************************************ 00:07:46.279 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:46.539 * Looking for test storage... 
00:07:46.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.539 --rc genhtml_branch_coverage=1 00:07:46.539 --rc genhtml_function_coverage=1 00:07:46.539 --rc genhtml_legend=1 00:07:46.539 --rc geninfo_all_blocks=1 00:07:46.539 --rc geninfo_unexecuted_blocks=1 00:07:46.539 00:07:46.539 ' 00:07:46.539 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.539 --rc genhtml_branch_coverage=1 00:07:46.539 --rc genhtml_function_coverage=1 00:07:46.539 --rc genhtml_legend=1 00:07:46.539 --rc geninfo_all_blocks=1 00:07:46.539 --rc geninfo_unexecuted_blocks=1 00:07:46.539 00:07:46.539 ' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.540 --rc genhtml_branch_coverage=1 00:07:46.540 --rc genhtml_function_coverage=1 00:07:46.540 --rc genhtml_legend=1 00:07:46.540 --rc geninfo_all_blocks=1 00:07:46.540 --rc geninfo_unexecuted_blocks=1 00:07:46.540 00:07:46.540 ' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.540 --rc genhtml_branch_coverage=1 00:07:46.540 --rc genhtml_function_coverage=1 00:07:46.540 --rc genhtml_legend=1 00:07:46.540 --rc geninfo_all_blocks=1 00:07:46.540 --rc geninfo_unexecuted_blocks=1 00:07:46.540 00:07:46.540 ' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
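The xtrace above (and the two identical passes earlier for nvmf_tcp and nvmf_target_core) is scripts/common.sh deciding whether the installed lcov is older than 2.x before choosing coverage flags. A minimal standalone sketch of that dotted-version comparison, assuming only what the trace shows: split both version strings on '.' or '-', compare field by field numerically, and treat a missing field as 0. This is a simplified illustration, not the repository's exact cmp_versions helper.

    lt() {  # true (returns 0) when version $1 is strictly lower than version $2
        local IFS=.-
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        echo "lcov < 2: using --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
    fi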
00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.540 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.540 15:55:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:46.540 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:46.541 Cannot find device "nvmf_init_br" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:46.541 Cannot find device "nvmf_init_br2" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:46.541 Cannot find device "nvmf_tgt_br" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.541 Cannot find device "nvmf_tgt_br2" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:46.541 Cannot find device "nvmf_init_br" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:46.541 Cannot find device "nvmf_init_br2" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:46.541 Cannot find device "nvmf_tgt_br" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:46.541 Cannot find device "nvmf_tgt_br2" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:46.541 Cannot find device "nvmf_br" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:46.541 Cannot find device "nvmf_init_if" 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:46.541 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:46.798 Cannot find device "nvmf_init_if2" 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.798 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:46.798 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.799 15:55:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:07:46.799 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.799 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.799 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:47.058 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:47.058 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:07:47.058 00:07:47.058 --- 10.0.0.3 ping statistics --- 00:07:47.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.058 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:47.058 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:47.058 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:07:47.058 00:07:47.058 --- 10.0.0.4 ping statistics --- 00:07:47.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.058 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:47.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:47.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:47.058 00:07:47.058 --- 10.0.0.1 ping statistics --- 00:07:47.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.058 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:47.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:47.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:07:47.058 00:07:47.058 --- 10.0.0.2 ping statistics --- 00:07:47.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:47.058 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62660 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62660 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62660 ']' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.058 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.058 [2024-11-20 15:55:45.258989] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:47.059 [2024-11-20 15:55:45.259119] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.316 [2024-11-20 15:55:45.415457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.316 [2024-11-20 15:55:45.499338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.316 [2024-11-20 15:55:45.499424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.316 [2024-11-20 15:55:45.499444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.316 [2024-11-20 15:55:45.499461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.316 [2024-11-20 15:55:45.499474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:47.316 [2024-11-20 15:55:45.500771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.316 [2024-11-20 15:55:45.501041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:47.316 [2024-11-20 15:55:45.501057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.316 [2024-11-20 15:55:45.500876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.316 [2024-11-20 15:55:45.563606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.574 [2024-11-20 15:55:45.672653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
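Before the target application above is started, nvmf_veth_init (traced a few blocks back) builds the network it will listen on: two initiator-side veth endpoints kept in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side endpoints moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), their bridge-side peers enslaved to nvmf_br, and iptables rules admitting TCP port 4420. A condensed sketch of that topology for one initiator/target pair, using only names, addresses and commands that appear in the trace (the *_if2/*_br2 pair is set up the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # root namespace reaches the target-side address over the bridge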
00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.574 Malloc0 00:07:47.574 [2024-11-20 15:55:45.756382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62702 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62702 /var/tmp/bdevperf.sock 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 62702 ']' 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.574 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
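host_management.sh's starttarget appears to write its setup RPCs to rpcs.txt and feed them through rpc_cmd; the file's contents are not echoed above, but the resulting state is visible in the trace: a TCP transport (created with -t tcp -o -u 8192), a Malloc0 bdev sized by MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512, and a listener on 10.0.0.3 port 4420 for the nqn.2016-06.io.spdk:cnode0 subsystem that bdevperf attaches to next. A hypothetical rpc.py sequence that would produce the same state; apart from the transport line, these exact calls are a reconstruction, not a quote from the script:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # shown verbatim in the trace
    # Reconstruction of the implied rpcs.txt contents (not echoed in this log):
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0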
00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.575 { 00:07:47.575 "params": { 00:07:47.575 "name": "Nvme$subsystem", 00:07:47.575 "trtype": "$TEST_TRANSPORT", 00:07:47.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.575 "adrfam": "ipv4", 00:07:47.575 "trsvcid": "$NVMF_PORT", 00:07:47.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.575 "hdgst": ${hdgst:-false}, 00:07:47.575 "ddgst": ${ddgst:-false} 00:07:47.575 }, 00:07:47.575 "method": "bdev_nvme_attach_controller" 00:07:47.575 } 00:07:47.575 EOF 00:07:47.575 )") 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:47.575 15:55:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.575 "params": { 00:07:47.575 "name": "Nvme0", 00:07:47.575 "trtype": "tcp", 00:07:47.575 "traddr": "10.0.0.3", 00:07:47.575 "adrfam": "ipv4", 00:07:47.575 "trsvcid": "4420", 00:07:47.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.575 "hdgst": false, 00:07:47.575 "ddgst": false 00:07:47.575 }, 00:07:47.575 "method": "bdev_nvme_attach_controller" 00:07:47.575 }' 00:07:47.833 [2024-11-20 15:55:45.858909] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:47.833 [2024-11-20 15:55:45.859009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62702 ] 00:07:47.833 [2024-11-20 15:55:46.008738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.091 [2024-11-20 15:55:46.081219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.091 [2024-11-20 15:55:46.149761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.091 Running I/O for 10 seconds... 
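The --json /dev/fd/63 argument in the bdevperf command above is a bash process substitution: gen_nvmf_target_json 0 expands the per-controller template shown into the resolved bdev_nvme_attach_controller entry printed at the end of the block, and bdevperf reads that output as its configuration. Written out explicitly, with the helper name, paths and parameters taken from the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10
    # Attaches Nvme0 over NVMe/TCP to 10.0.0.3:4420 / nqn.2016-06.io.spdk:cnode0 and
    # runs a queue-depth-64, 64 KiB 'verify' workload for 10 seconds.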
00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.658 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=835 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.919 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.919 [2024-11-20 
15:55:46.941273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb4e50 is same with the state(6) to be set 00:07:48.919
[2024-11-20 15:55:46.941321 - 15:55:46.941939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: (same "recv state of tqpair=0xdb4e50" message repeated throughout this interval; individual timestamped entries elided) 00:07:48.919
[2024-11-20 15:55:46.942042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.919 [2024-11-20 15:55:46.942072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.920
[2024-11-20 15:55:46.942095 - 15:55:46.943434] nvme_qpair.c: (matching READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1 through cid:60, lba:114816 through lba:122368 in steps of 128; individual entries elided) 00:07:48.921
[2024-11-20 15:55:46.943445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.921 [2024-11-20 15:55:46.943462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.921 [2024-11-20 15:55:46.943472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.921 [2024-11-20 15:55:46.943481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.921 [2024-11-20 15:55:46.943493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.921 [2024-11-20 15:55:46.943502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.921 [2024-11-20 15:55:46.943523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252130 is same with the state(6) to be set 00:07:48.921 [2024-11-20 15:55:46.944796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:48.921 task offset: 114688 on job bdev=Nvme0n1 fails 00:07:48.921 00:07:48.921 Latency(us) 00:07:48.921 [2024-11-20T15:55:47.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.921 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.921 Job: Nvme0n1 ended in about 0.66 seconds with error 00:07:48.921 Verification LBA range: start 0x0 length 0x400 00:07:48.921 Nvme0n1 : 0.66 1355.89 84.74 96.85 0.00 42737.14 3485.32 50045.67 00:07:48.921 [2024-11-20T15:55:47.171Z] =================================================================================================================== 00:07:48.921 [2024-11-20T15:55:47.171Z] Total : 1355.89 84.74 96.85 0.00 42737.14 3485.32 50045.67 00:07:48.921 [2024-11-20 15:55:46.947111] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.921 [2024-11-20 15:55:46.947142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2257ce0 (9): Bad file descriptor 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.921 [2024-11-20 15:55:46.956891] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:48.921 [2024-11-20 15:55:46.957006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:48.921 [2024-11-20 15:55:46.957032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.921 [2024-11-20 15:55:46.957047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 
traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:48.921 [2024-11-20 15:55:46.957058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:48.921 [2024-11-20 15:55:46.957067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:48.921 [2024-11-20 15:55:46.957076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x2257ce0 00:07:48.921 [2024-11-20 15:55:46.957111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2257ce0 (9): Bad file descriptor 00:07:48.921 [2024-11-20 15:55:46.957129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:48.921 [2024-11-20 15:55:46.957139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:48.921 [2024-11-20 15:55:46.957150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:48.921 [2024-11-20 15:55:46.957160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.921 15:55:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62702 00:07:49.857 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62702) - No such process 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.857 { 00:07:49.857 "params": { 00:07:49.857 "name": "Nvme$subsystem", 00:07:49.857 "trtype": "$TEST_TRANSPORT", 00:07:49.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.857 "adrfam": "ipv4", 00:07:49.857 "trsvcid": "$NVMF_PORT", 00:07:49.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.857 "hdgst": ${hdgst:-false}, 00:07:49.857 "ddgst": ${ddgst:-false} 00:07:49.857 }, 00:07:49.857 "method": "bdev_nvme_attach_controller" 00:07:49.857 } 00:07:49.857 EOF 00:07:49.857 )") 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 
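The failed reconnects above are the point of the test: host_management.sh removed nqn.2016-06.io.spdk:host0 from cnode0's allow list mid-run (the nvmf_subsystem_remove_host RPC earlier), so the outstanding I/O was aborted and the controller reset was rejected with "does not allow host"; it then re-adds the host and starts a fresh one-second bdevperf run. Done by hand, the same toggle is just two RPCs (rpc.py path as defined in this repo checkout, see the rpc_py assignment later in this log):
# The allow-list toggle exercised above, issued manually.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...initiator I/O now fails with ABORTED / "does not allow host"...
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0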
00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:49.857 15:55:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.857 "params": { 00:07:49.857 "name": "Nvme0", 00:07:49.857 "trtype": "tcp", 00:07:49.857 "traddr": "10.0.0.3", 00:07:49.857 "adrfam": "ipv4", 00:07:49.857 "trsvcid": "4420", 00:07:49.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.857 "hdgst": false, 00:07:49.857 "ddgst": false 00:07:49.857 }, 00:07:49.857 "method": "bdev_nvme_attach_controller" 00:07:49.857 }' 00:07:49.857 [2024-11-20 15:55:48.024145] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:07:49.857 [2024-11-20 15:55:48.024234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62740 ] 00:07:50.115 [2024-11-20 15:55:48.166385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.115 [2024-11-20 15:55:48.228968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.115 [2024-11-20 15:55:48.292402] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.373 Running I/O for 1 seconds... 00:07:51.310 1472.00 IOPS, 92.00 MiB/s 00:07:51.310 Latency(us) 00:07:51.310 [2024-11-20T15:55:49.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.310 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:51.310 Verification LBA range: start 0x0 length 0x400 00:07:51.310 Nvme0n1 : 1.01 1522.54 95.16 0.00 0.00 41203.03 4140.68 38606.66 00:07:51.310 [2024-11-20T15:55:49.560Z] =================================================================================================================== 00:07:51.310 [2024-11-20T15:55:49.560Z] Total : 1522.54 95.16 0.00 0.00 41203.03 4140.68 38606.66 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.569 rmmod nvme_tcp 00:07:51.569 rmmod nvme_fabrics 00:07:51.569 rmmod nvme_keyring 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62660 ']' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62660 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 62660 ']' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 62660 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62660 00:07:51.569 killing process with pid 62660 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62660' 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 62660 00:07:51.569 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 62660 00:07:51.828 [2024-11-20 15:55:49.946142] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:51.828 15:55:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set 
nvmf_tgt_br nomaster 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:51.828 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:52.086 00:07:52.086 real 0m5.781s 00:07:52.086 user 0m20.600s 00:07:52.086 sys 0m1.586s 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.086 ************************************ 00:07:52.086 END TEST nvmf_host_management 00:07:52.086 ************************************ 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.086 ************************************ 00:07:52.086 START TEST nvmf_lvol 00:07:52.086 ************************************ 00:07:52.086 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:52.345 * Looking for test storage... 
00:07:52.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:52.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.345 --rc genhtml_branch_coverage=1 00:07:52.345 --rc genhtml_function_coverage=1 00:07:52.345 --rc genhtml_legend=1 00:07:52.345 --rc geninfo_all_blocks=1 00:07:52.345 --rc geninfo_unexecuted_blocks=1 00:07:52.345 00:07:52.345 ' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:52.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.345 --rc genhtml_branch_coverage=1 00:07:52.345 --rc genhtml_function_coverage=1 00:07:52.345 --rc genhtml_legend=1 00:07:52.345 --rc geninfo_all_blocks=1 00:07:52.345 --rc geninfo_unexecuted_blocks=1 00:07:52.345 00:07:52.345 ' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:52.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.345 --rc genhtml_branch_coverage=1 00:07:52.345 --rc genhtml_function_coverage=1 00:07:52.345 --rc genhtml_legend=1 00:07:52.345 --rc geninfo_all_blocks=1 00:07:52.345 --rc geninfo_unexecuted_blocks=1 00:07:52.345 00:07:52.345 ' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:52.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:52.345 --rc genhtml_branch_coverage=1 00:07:52.345 --rc genhtml_function_coverage=1 00:07:52.345 --rc genhtml_legend=1 00:07:52.345 --rc geninfo_all_blocks=1 00:07:52.345 --rc geninfo_unexecuted_blocks=1 00:07:52.345 00:07:52.345 ' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.345 15:55:50 
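Stripped of the xtrace prefixes, the nvmftestfini cleanup that closed the host_management run above reduces to the sketch below. Module, bridge, interface and namespace names are exactly as traced; treating the three iptr pieces as a single pipeline is an inference from their shared source line.
# Condensed view of the teardown traced at the end of nvmf_host_management.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore   # assumed pipeline behind iptr
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2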
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:52.345 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:52.345 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:52.346 
15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:52.346 Cannot find device "nvmf_init_br" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:52.346 Cannot find device "nvmf_init_br2" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:52.346 Cannot find device "nvmf_tgt_br" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:52.346 Cannot find device "nvmf_tgt_br2" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:52.346 Cannot find device "nvmf_init_br" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:52.346 Cannot find device "nvmf_init_br2" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:52.346 Cannot find device "nvmf_tgt_br" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:52.346 Cannot find device "nvmf_tgt_br2" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:52.346 Cannot find device "nvmf_br" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:52.346 Cannot find device "nvmf_init_if" 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:52.346 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:52.605 Cannot find device "nvmf_init_if2" 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:52.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:52.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:52.605 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:52.605 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:07:52.605 00:07:52.605 --- 10.0.0.3 ping statistics --- 00:07:52.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.605 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:52.605 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:52.605 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:52.605 00:07:52.605 --- 10.0.0.4 ping statistics --- 00:07:52.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.605 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:52.605 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:52.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:07:52.863 00:07:52.863 --- 10.0.0.1 ping statistics --- 00:07:52.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.863 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:52.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:52.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:07:52.864 00:07:52.864 --- 10.0.0.2 ping statistics --- 00:07:52.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.864 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=63007 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 63007 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 63007 ']' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.864 15:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:52.864 [2024-11-20 15:55:50.951235] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:07:52.864 [2024-11-20 15:55:50.951355] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.864 [2024-11-20 15:55:51.102363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.122 [2024-11-20 15:55:51.171556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.122 [2024-11-20 15:55:51.171649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.122 [2024-11-20 15:55:51.171672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.122 [2024-11-20 15:55:51.171687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.122 [2024-11-20 15:55:51.171700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.122 [2024-11-20 15:55:51.173023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.122 [2024-11-20 15:55:51.173133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.122 [2024-11-20 15:55:51.173141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.122 [2024-11-20 15:55:51.232861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.122 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:53.689 [2024-11-20 15:55:51.640787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.689 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:53.948 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:53.948 15:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.206 15:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:54.206 15:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.465 15:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:55.033 15:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=576f40f4-defc-480a-a93a-77726cf50dcb 00:07:55.033 15:55:53 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 576f40f4-defc-480a-a93a-77726cf50dcb lvol 20 00:07:55.033 15:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d450d902-91a6-48f6-8a09-f62017b3a96a 00:07:55.033 15:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.291 15:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d450d902-91a6-48f6-8a09-f62017b3a96a 00:07:55.858 15:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:55.858 [2024-11-20 15:55:54.065015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:55.858 15:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:56.115 15:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=63081 00:07:56.115 15:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:56.116 15:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:57.492 15:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d450d902-91a6-48f6-8a09-f62017b3a96a MY_SNAPSHOT 00:07:57.492 15:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e92670ef-a7df-4162-9a11-a411d5e30947 00:07:57.492 15:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d450d902-91a6-48f6-8a09-f62017b3a96a 30 00:07:58.059 15:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e92670ef-a7df-4162-9a11-a411d5e30947 MY_CLONE 00:07:58.318 15:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8d73f46a-530c-4526-9ff2-0e1ddd1d2dca 00:07:58.318 15:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8d73f46a-530c-4526-9ff2-0e1ddd1d2dca 00:07:58.912 15:55:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 63081 00:08:07.112 Initializing NVMe Controllers 00:08:07.112 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:07.112 Controller IO queue size 128, less than required. 00:08:07.112 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:07.112 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:07.112 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:07.112 Initialization complete. Launching workers. 
00:08:07.112 ======================================================== 00:08:07.112 Latency(us) 00:08:07.112 Device Information : IOPS MiB/s Average min max 00:08:07.112 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10232.20 39.97 12518.38 1507.15 53747.16 00:08:07.112 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9878.40 38.59 12968.36 3307.38 61643.31 00:08:07.112 ======================================================== 00:08:07.112 Total : 20110.60 78.56 12739.41 1507.15 61643.31 00:08:07.112 00:08:07.112 15:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.112 15:56:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d450d902-91a6-48f6-8a09-f62017b3a96a 00:08:07.112 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 576f40f4-defc-480a-a93a-77726cf50dcb 00:08:07.370 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:07.370 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:07.370 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:07.370 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.370 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:07.629 rmmod nvme_tcp 00:08:07.629 rmmod nvme_fabrics 00:08:07.629 rmmod nvme_keyring 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 63007 ']' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 63007 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 63007 ']' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 63007 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63007 00:08:07.629 killing process with pid 63007 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63007' 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 63007 00:08:07.629 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 63007 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:07.887 15:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:07.887 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:08.145 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:08.146 ************************************ 00:08:08.146 END TEST nvmf_lvol 00:08:08.146 ************************************ 00:08:08.146 00:08:08.146 real 0m15.931s 00:08:08.146 user 
1m5.853s 00:08:08.146 sys 0m4.207s 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:08.146 ************************************ 00:08:08.146 START TEST nvmf_lvs_grow 00:08:08.146 ************************************ 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:08.146 * Looking for test storage... 00:08:08.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:08.146 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.404 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:08.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.405 --rc genhtml_branch_coverage=1 00:08:08.405 --rc genhtml_function_coverage=1 00:08:08.405 --rc genhtml_legend=1 00:08:08.405 --rc geninfo_all_blocks=1 00:08:08.405 --rc geninfo_unexecuted_blocks=1 00:08:08.405 00:08:08.405 ' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:08.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.405 --rc genhtml_branch_coverage=1 00:08:08.405 --rc genhtml_function_coverage=1 00:08:08.405 --rc genhtml_legend=1 00:08:08.405 --rc geninfo_all_blocks=1 00:08:08.405 --rc geninfo_unexecuted_blocks=1 00:08:08.405 00:08:08.405 ' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:08.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.405 --rc genhtml_branch_coverage=1 00:08:08.405 --rc genhtml_function_coverage=1 00:08:08.405 --rc genhtml_legend=1 00:08:08.405 --rc geninfo_all_blocks=1 00:08:08.405 --rc geninfo_unexecuted_blocks=1 00:08:08.405 00:08:08.405 ' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:08.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.405 --rc genhtml_branch_coverage=1 00:08:08.405 --rc genhtml_function_coverage=1 00:08:08.405 --rc genhtml_legend=1 00:08:08.405 --rc geninfo_all_blocks=1 00:08:08.405 --rc geninfo_unexecuted_blocks=1 00:08:08.405 00:08:08.405 ' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:08.405 15:56:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:08.405 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
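Before the lvs_grow flow runs its own nvmftestinit below with the rpc.py path just defined, the rpc.py sequence that the nvmf_lvol run above traced is easier to follow in one place. The block below is a condensed, hand-written recap of those traced commands for readability, not the test script itself; <lvs-uuid> and <lvol-uuid> stand in for the run-specific UUIDs printed in the trace (576f40f4-... and d450d902-... in this run).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options traced above
$rpc bdev_malloc_create 64 512                                   # two malloc bdevs (MALLOC_BDEV_SIZE=64, block size 512)
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe them into a raid0 base bdev
$rpc bdev_lvol_create_lvstore raid0 lvs                          # lvstore on the raid bdev, prints <lvs-uuid>
$rpc bdev_lvol_create -u <lvs-uuid> lvol 20                      # 'lvol' of size 20 (LVOL_BDEV_INIT_SIZE), prints <lvol-uuid>
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# While spdk_nvme_perf runs random writes against the exported namespace over 10.0.0.3:4420,
# the lvol is snapshotted (MY_SNAPSHOT), resized to 30 (LVOL_BDEV_FINAL_SIZE), the snapshot is
# cloned (MY_CLONE) and the clone inflated; subsystem, lvol and lvstore are deleted again
# before nvmftestfini tears the networking down.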
00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:08.405 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:08.406 Cannot find device "nvmf_init_br" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:08.406 Cannot find device "nvmf_init_br2" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:08.406 Cannot find device "nvmf_tgt_br" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:08.406 Cannot find device "nvmf_tgt_br2" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:08.406 Cannot find device "nvmf_init_br" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:08.406 Cannot find device "nvmf_init_br2" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:08.406 Cannot find device "nvmf_tgt_br" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:08.406 Cannot find device "nvmf_tgt_br2" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:08.406 Cannot find device "nvmf_br" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:08.406 Cannot find device "nvmf_init_if" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:08.406 Cannot find device "nvmf_init_if2" 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:08.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:08.406 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:08.406 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
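The enslaving of the four *_br links to nvmf_br just above completes the same veth topology the earlier nvmf_lvol init built. Condensed into a plain script (interface names and addresses taken from the traced commands; error handling, cleanup and the iptables rules that follow are omitted), the topology build amounts to roughly:

ip netns add nvmf_tgt_ns_spdk                                  # target side lives in its own namespace
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # two initiator-side veth pairs
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # two target-side veth pairs
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up     # one bridge joins both halves
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" up
    ip link set "$peer" master nvmf_br                        # host-side peers enslaved to the bridge
done

The pings that follow (and the matching ones in the earlier nvmf_lvol init) check that the bridge forwards in both directions before the target is started.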
00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:08.665 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:08.665 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:08:08.665 00:08:08.665 --- 10.0.0.3 ping statistics --- 00:08:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.665 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:08.665 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:08.665 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:08:08.665 00:08:08.665 --- 10.0.0.4 ping statistics --- 00:08:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.665 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:08.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:08.665 00:08:08.665 --- 10.0.0.1 ping statistics --- 00:08:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.665 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:08.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:08:08.665 00:08:08.665 --- 10.0.0.2 ping statistics --- 00:08:08.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.665 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=63465 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 63465 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 63465 ']' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.665 15:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:08.923 [2024-11-20 15:56:06.934037] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
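With the bridge up, the script opens TCP port 4420 on the initiator-side interfaces, verifies connectivity in both directions, and then starts the target application inside the namespace. A condensed sketch of those steps (paths relative to the SPDK repo; the polling loop stands in for waitforlisten, which is implemented differently in autotest_common.sh):

# Allow NVMe/TCP (port 4420) in and bridged traffic through; the comment tag
# lets the cleanup path later remove exactly these rules.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity-check the bridge in both directions.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

# Start the target inside the namespace (single core, all tracepoint groups)
# and wait until its JSON-RPC socket answers.
modprobe nvme-tcp
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done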
00:08:08.923 [2024-11-20 15:56:06.934139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.923 [2024-11-20 15:56:07.088268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.923 [2024-11-20 15:56:07.161117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.923 [2024-11-20 15:56:07.161187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.923 [2024-11-20 15:56:07.161201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.923 [2024-11-20 15:56:07.161212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.923 [2024-11-20 15:56:07.161221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:08.923 [2024-11-20 15:56:07.161706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.182 [2024-11-20 15:56:07.222564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.182 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:09.442 [2024-11-20 15:56:07.636008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:09.442 ************************************ 00:08:09.442 START TEST lvs_grow_clean 00:08:09.442 ************************************ 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:09.442 15:56:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:09.442 15:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.009 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:10.009 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:10.277 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:10.277 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:10.277 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:10.556 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:10.556 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:10.556 15:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u fef071c0-59c8-4bc4-918f-d59841f118d1 lvol 150 00:08:11.125 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ed4e116-602e-4521-88de-5595b10ae37a 00:08:11.125 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:11.125 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:11.125 [2024-11-20 15:56:09.357924] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:11.125 [2024-11-20 15:56:09.358029] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:11.125 true 00:08:11.383 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:11.383 15:56:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:11.642 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:11.642 15:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.900 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6ed4e116-602e-4521-88de-5595b10ae37a 00:08:12.158 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:12.415 [2024-11-20 15:56:10.574550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:12.415 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63551 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63551 /var/tmp/bdevperf.sock 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 63551 ']' 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.673 15:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:12.931 [2024-11-20 15:56:10.972079] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
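The lvs_grow_clean body traced above reduces to a short RPC sequence: create the TCP transport, build a lvol store on a file-backed AIO bdev, grow the backing file, export the lvol over NVMe/TCP, and attach bdevperf as the initiator (the attach is the next step in the trace). A sketch under those assumptions, with shell variables standing in for the UUIDs the log shows being generated and a shortened AIO file path:

RPC=./scripts/rpc.py
AIO_FILE=test/nvmf/target/aio_bdev              # 200 MiB file backing the AIO bdev

$RPC nvmf_create_transport -t tcp -o -u 8192    # transport options copied verbatim from the trace

truncate -s 200M "$AIO_FILE"
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters of 4 MiB
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)         # 150 MiB logical volume

# Double the backing file and let the AIO bdev pick up the new size; the
# lvstore itself is grown later, while I/O is already running.
truncate -s 400M "$AIO_FILE"
$RPC bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP on the target-namespace address.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# bdevperf acts as the initiator: 4 KiB random writes, queue depth 128, 10 s.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0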
00:08:12.931 [2024-11-20 15:56:10.972183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:08:12.931 [2024-11-20 15:56:11.128244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.188 [2024-11-20 15:56:11.206114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.188 [2024-11-20 15:56:11.265231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.188 15:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.188 15:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:13.188 15:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:13.757 Nvme0n1 00:08:13.757 15:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:14.015 [ 00:08:14.015 { 00:08:14.015 "name": "Nvme0n1", 00:08:14.015 "aliases": [ 00:08:14.015 "6ed4e116-602e-4521-88de-5595b10ae37a" 00:08:14.015 ], 00:08:14.015 "product_name": "NVMe disk", 00:08:14.015 "block_size": 4096, 00:08:14.016 "num_blocks": 38912, 00:08:14.016 "uuid": "6ed4e116-602e-4521-88de-5595b10ae37a", 00:08:14.016 "numa_id": -1, 00:08:14.016 "assigned_rate_limits": { 00:08:14.016 "rw_ios_per_sec": 0, 00:08:14.016 "rw_mbytes_per_sec": 0, 00:08:14.016 "r_mbytes_per_sec": 0, 00:08:14.016 "w_mbytes_per_sec": 0 00:08:14.016 }, 00:08:14.016 "claimed": false, 00:08:14.016 "zoned": false, 00:08:14.016 "supported_io_types": { 00:08:14.016 "read": true, 00:08:14.016 "write": true, 00:08:14.016 "unmap": true, 00:08:14.016 "flush": true, 00:08:14.016 "reset": true, 00:08:14.016 "nvme_admin": true, 00:08:14.016 "nvme_io": true, 00:08:14.016 "nvme_io_md": false, 00:08:14.016 "write_zeroes": true, 00:08:14.016 "zcopy": false, 00:08:14.016 "get_zone_info": false, 00:08:14.016 "zone_management": false, 00:08:14.016 "zone_append": false, 00:08:14.016 "compare": true, 00:08:14.016 "compare_and_write": true, 00:08:14.016 "abort": true, 00:08:14.016 "seek_hole": false, 00:08:14.016 "seek_data": false, 00:08:14.016 "copy": true, 00:08:14.016 "nvme_iov_md": false 00:08:14.016 }, 00:08:14.016 "memory_domains": [ 00:08:14.016 { 00:08:14.016 "dma_device_id": "system", 00:08:14.016 "dma_device_type": 1 00:08:14.016 } 00:08:14.016 ], 00:08:14.016 "driver_specific": { 00:08:14.016 "nvme": [ 00:08:14.016 { 00:08:14.016 "trid": { 00:08:14.016 "trtype": "TCP", 00:08:14.016 "adrfam": "IPv4", 00:08:14.016 "traddr": "10.0.0.3", 00:08:14.016 "trsvcid": "4420", 00:08:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:14.016 }, 00:08:14.016 "ctrlr_data": { 00:08:14.016 "cntlid": 1, 00:08:14.016 "vendor_id": "0x8086", 00:08:14.016 "model_number": "SPDK bdev Controller", 00:08:14.016 "serial_number": "SPDK0", 00:08:14.016 "firmware_revision": "25.01", 00:08:14.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.016 "oacs": { 00:08:14.016 "security": 0, 00:08:14.016 "format": 0, 00:08:14.016 "firmware": 0, 
00:08:14.016 "ns_manage": 0 00:08:14.016 }, 00:08:14.016 "multi_ctrlr": true, 00:08:14.016 "ana_reporting": false 00:08:14.016 }, 00:08:14.016 "vs": { 00:08:14.016 "nvme_version": "1.3" 00:08:14.016 }, 00:08:14.016 "ns_data": { 00:08:14.016 "id": 1, 00:08:14.016 "can_share": true 00:08:14.016 } 00:08:14.016 } 00:08:14.016 ], 00:08:14.016 "mp_policy": "active_passive" 00:08:14.016 } 00:08:14.016 } 00:08:14.016 ] 00:08:14.016 15:56:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63567 00:08:14.016 15:56:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.016 15:56:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:14.016 Running I/O for 10 seconds... 00:08:14.977 Latency(us) 00:08:14.977 [2024-11-20T15:56:13.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.977 Nvme0n1 : 1.00 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:14.977 [2024-11-20T15:56:13.227Z] =================================================================================================================== 00:08:14.977 [2024-11-20T15:56:13.227Z] Total : 6482.00 25.32 0.00 0.00 0.00 0.00 0.00 00:08:14.977 00:08:15.911 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:16.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.169 Nvme0n1 : 2.00 6797.00 26.55 0.00 0.00 0.00 0.00 0.00 00:08:16.169 [2024-11-20T15:56:14.419Z] =================================================================================================================== 00:08:16.169 [2024-11-20T15:56:14.419Z] Total : 6797.00 26.55 0.00 0.00 0.00 0.00 0.00 00:08:16.169 00:08:16.169 true 00:08:16.169 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:16.169 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:16.736 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.736 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.736 15:56:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63567 00:08:16.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.993 Nvme0n1 : 3.00 6944.33 27.13 0.00 0.00 0.00 0.00 0.00 00:08:16.993 [2024-11-20T15:56:15.243Z] =================================================================================================================== 00:08:16.993 [2024-11-20T15:56:15.243Z] Total : 6944.33 27.13 0.00 0.00 0.00 0.00 0.00 00:08:16.993 00:08:18.378 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.378 Nvme0n1 : 4.00 6986.25 27.29 0.00 0.00 0.00 0.00 0.00 00:08:18.378 [2024-11-20T15:56:16.628Z] 
=================================================================================================================== 00:08:18.378 [2024-11-20T15:56:16.628Z] Total : 6986.25 27.29 0.00 0.00 0.00 0.00 0.00 00:08:18.378 00:08:19.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.313 Nvme0n1 : 5.00 6986.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:19.313 [2024-11-20T15:56:17.563Z] =================================================================================================================== 00:08:19.313 [2024-11-20T15:56:17.563Z] Total : 6986.00 27.29 0.00 0.00 0.00 0.00 0.00 00:08:19.313 00:08:20.247 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.247 Nvme0n1 : 6.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:20.247 [2024-11-20T15:56:18.497Z] =================================================================================================================== 00:08:20.247 [2024-11-20T15:56:18.497Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:08:20.247 00:08:21.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.182 Nvme0n1 : 7.00 6876.14 26.86 0.00 0.00 0.00 0.00 0.00 00:08:21.182 [2024-11-20T15:56:19.432Z] =================================================================================================================== 00:08:21.182 [2024-11-20T15:56:19.432Z] Total : 6876.14 26.86 0.00 0.00 0.00 0.00 0.00 00:08:21.182 00:08:22.114 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.114 Nvme0n1 : 8.00 6842.12 26.73 0.00 0.00 0.00 0.00 0.00 00:08:22.114 [2024-11-20T15:56:20.364Z] =================================================================================================================== 00:08:22.114 [2024-11-20T15:56:20.364Z] Total : 6842.12 26.73 0.00 0.00 0.00 0.00 0.00 00:08:22.114 00:08:23.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.048 Nvme0n1 : 9.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:08:23.048 [2024-11-20T15:56:21.298Z] =================================================================================================================== 00:08:23.048 [2024-11-20T15:56:21.298Z] Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:08:23.048 00:08:23.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.980 Nvme0n1 : 10.00 6819.90 26.64 0.00 0.00 0.00 0.00 0.00 00:08:23.980 [2024-11-20T15:56:22.230Z] =================================================================================================================== 00:08:23.980 [2024-11-20T15:56:22.230Z] Total : 6819.90 26.64 0.00 0.00 0.00 0.00 0.00 00:08:23.980 00:08:23.980 00:08:23.980 Latency(us) 00:08:23.980 [2024-11-20T15:56:22.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.980 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.980 Nvme0n1 : 10.01 6828.86 26.68 0.00 0.00 18739.62 7387.69 119632.99 00:08:23.980 [2024-11-20T15:56:22.230Z] =================================================================================================================== 00:08:23.980 [2024-11-20T15:56:22.230Z] Total : 6828.86 26.68 0.00 0.00 18739.62 7387.69 119632.99 00:08:23.980 { 00:08:23.980 "results": [ 00:08:23.980 { 00:08:23.980 "job": "Nvme0n1", 00:08:23.980 "core_mask": "0x2", 00:08:23.980 "workload": "randwrite", 00:08:23.980 "status": "finished", 00:08:23.980 "queue_depth": 128, 00:08:23.980 "io_size": 4096, 00:08:23.980 "runtime": 
10.005628, 00:08:23.980 "iops": 6828.8567194383, 00:08:23.980 "mibps": 26.67522156030586, 00:08:23.980 "io_failed": 0, 00:08:23.980 "io_timeout": 0, 00:08:23.980 "avg_latency_us": 18739.617258504226, 00:08:23.980 "min_latency_us": 7387.694545454546, 00:08:23.980 "max_latency_us": 119632.98909090909 00:08:23.980 } 00:08:23.980 ], 00:08:23.980 "core_count": 1 00:08:23.980 } 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63551 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 63551 ']' 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 63551 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63551 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:24.238 killing process with pid 63551 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63551' 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 63551 00:08:24.238 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.238 00:08:24.238 Latency(us) 00:08:24.238 [2024-11-20T15:56:22.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.238 [2024-11-20T15:56:22.488Z] =================================================================================================================== 00:08:24.238 [2024-11-20T15:56:22.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 63551 00:08:24.238 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:24.805 15:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:25.063 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:25.063 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:25.321 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:25.321 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:25.321 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.579 [2024-11-20 15:56:23.689467] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:25.579 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:25.837 request: 00:08:25.837 { 00:08:25.837 "uuid": "fef071c0-59c8-4bc4-918f-d59841f118d1", 00:08:25.837 "method": "bdev_lvol_get_lvstores", 00:08:25.837 "req_id": 1 00:08:25.837 } 00:08:25.837 Got JSON-RPC error response 00:08:25.837 response: 00:08:25.837 { 00:08:25.837 "code": -19, 00:08:25.838 "message": "No such device" 00:08:25.838 } 00:08:25.838 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:25.838 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.838 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.838 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.838 15:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.095 aio_bdev 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
6ed4e116-602e-4521-88de-5595b10ae37a 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6ed4e116-602e-4521-88de-5595b10ae37a 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.095 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:26.660 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ed4e116-602e-4521-88de-5595b10ae37a -t 2000 00:08:26.917 [ 00:08:26.917 { 00:08:26.917 "name": "6ed4e116-602e-4521-88de-5595b10ae37a", 00:08:26.917 "aliases": [ 00:08:26.917 "lvs/lvol" 00:08:26.917 ], 00:08:26.917 "product_name": "Logical Volume", 00:08:26.917 "block_size": 4096, 00:08:26.917 "num_blocks": 38912, 00:08:26.917 "uuid": "6ed4e116-602e-4521-88de-5595b10ae37a", 00:08:26.917 "assigned_rate_limits": { 00:08:26.917 "rw_ios_per_sec": 0, 00:08:26.917 "rw_mbytes_per_sec": 0, 00:08:26.917 "r_mbytes_per_sec": 0, 00:08:26.917 "w_mbytes_per_sec": 0 00:08:26.917 }, 00:08:26.917 "claimed": false, 00:08:26.917 "zoned": false, 00:08:26.917 "supported_io_types": { 00:08:26.917 "read": true, 00:08:26.917 "write": true, 00:08:26.917 "unmap": true, 00:08:26.917 "flush": false, 00:08:26.917 "reset": true, 00:08:26.917 "nvme_admin": false, 00:08:26.917 "nvme_io": false, 00:08:26.917 "nvme_io_md": false, 00:08:26.917 "write_zeroes": true, 00:08:26.917 "zcopy": false, 00:08:26.917 "get_zone_info": false, 00:08:26.917 "zone_management": false, 00:08:26.917 "zone_append": false, 00:08:26.917 "compare": false, 00:08:26.917 "compare_and_write": false, 00:08:26.917 "abort": false, 00:08:26.917 "seek_hole": true, 00:08:26.917 "seek_data": true, 00:08:26.917 "copy": false, 00:08:26.917 "nvme_iov_md": false 00:08:26.917 }, 00:08:26.917 "driver_specific": { 00:08:26.917 "lvol": { 00:08:26.917 "lvol_store_uuid": "fef071c0-59c8-4bc4-918f-d59841f118d1", 00:08:26.917 "base_bdev": "aio_bdev", 00:08:26.917 "thin_provision": false, 00:08:26.917 "num_allocated_clusters": 38, 00:08:26.917 "snapshot": false, 00:08:26.917 "clone": false, 00:08:26.917 "esnap_clone": false 00:08:26.917 } 00:08:26.917 } 00:08:26.917 } 00:08:26.917 ] 00:08:26.917 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:26.917 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:26.917 15:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.175 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.175 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:27.175 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.433 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.433 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6ed4e116-602e-4521-88de-5595b10ae37a 00:08:27.692 15:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fef071c0-59c8-4bc4-918f-d59841f118d1 00:08:27.950 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.209 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.774 00:08:28.774 real 0m19.082s 00:08:28.774 user 0m18.063s 00:08:28.774 sys 0m2.608s 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:28.774 ************************************ 00:08:28.774 END TEST lvs_grow_clean 00:08:28.774 ************************************ 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.774 ************************************ 00:08:28.774 START TEST lvs_grow_dirty 00:08:28.774 ************************************ 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.774 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:28.775 15:56:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:29.032 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:29.032 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:29.290 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:29.290 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:29.290 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:29.547 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:29.547 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:29.547 15:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a lvol 150 00:08:29.805 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ee29a680-6a07-4b37-87da-882a32d25156 00:08:29.805 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:29.805 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:30.370 [2024-11-20 15:56:28.326544] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:30.370 [2024-11-20 15:56:28.326633] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:30.370 true 00:08:30.370 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:30.370 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:30.628 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:30.628 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:30.886 15:56:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ee29a680-6a07-4b37-87da-882a32d25156 00:08:31.143 15:56:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:31.401 [2024-11-20 15:56:29.475145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:31.401 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63823 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63823 /var/tmp/bdevperf.sock 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63823 ']' 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.660 15:56:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:31.660 [2024-11-20 15:56:29.817785] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
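From this point the dirty variant repeats the core pattern already seen in lvs_grow_clean: start the timed workload through bdevperf's RPC helper, grow the lvstore while I/O is in flight, and check the cluster accounting afterwards. A sketch of that shared core (the lvstore UUID is the one reported at creation; cluster counts match the trace):

lvs=3844e5bc-b633-4970-9a62-6ce9a8f49d5a     # lvstore UUID reported by bdev_lvol_create_lvstore

# Kick off the 10 s randwrite run asynchronously, then grow the lvstore while
# I/O is running; total_data_clusters should jump from 49 to 99.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

wait "$run_test_pid"

# After the run the 150 MiB lvol occupies 38 clusters, so 99 - 38 = 61 are free.
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'          # expect 61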
00:08:31.660 [2024-11-20 15:56:29.817898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63823 ] 00:08:31.918 [2024-11-20 15:56:29.968164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.918 [2024-11-20 15:56:30.031479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.918 [2024-11-20 15:56:30.086329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.919 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.919 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:31.919 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:32.485 Nvme0n1 00:08:32.485 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:32.745 [ 00:08:32.745 { 00:08:32.745 "name": "Nvme0n1", 00:08:32.745 "aliases": [ 00:08:32.745 "ee29a680-6a07-4b37-87da-882a32d25156" 00:08:32.745 ], 00:08:32.745 "product_name": "NVMe disk", 00:08:32.745 "block_size": 4096, 00:08:32.745 "num_blocks": 38912, 00:08:32.745 "uuid": "ee29a680-6a07-4b37-87da-882a32d25156", 00:08:32.745 "numa_id": -1, 00:08:32.745 "assigned_rate_limits": { 00:08:32.745 "rw_ios_per_sec": 0, 00:08:32.745 "rw_mbytes_per_sec": 0, 00:08:32.745 "r_mbytes_per_sec": 0, 00:08:32.745 "w_mbytes_per_sec": 0 00:08:32.745 }, 00:08:32.745 "claimed": false, 00:08:32.745 "zoned": false, 00:08:32.745 "supported_io_types": { 00:08:32.745 "read": true, 00:08:32.745 "write": true, 00:08:32.745 "unmap": true, 00:08:32.745 "flush": true, 00:08:32.745 "reset": true, 00:08:32.745 "nvme_admin": true, 00:08:32.745 "nvme_io": true, 00:08:32.745 "nvme_io_md": false, 00:08:32.745 "write_zeroes": true, 00:08:32.745 "zcopy": false, 00:08:32.745 "get_zone_info": false, 00:08:32.745 "zone_management": false, 00:08:32.745 "zone_append": false, 00:08:32.745 "compare": true, 00:08:32.745 "compare_and_write": true, 00:08:32.745 "abort": true, 00:08:32.745 "seek_hole": false, 00:08:32.745 "seek_data": false, 00:08:32.745 "copy": true, 00:08:32.745 "nvme_iov_md": false 00:08:32.745 }, 00:08:32.745 "memory_domains": [ 00:08:32.745 { 00:08:32.745 "dma_device_id": "system", 00:08:32.745 "dma_device_type": 1 00:08:32.745 } 00:08:32.745 ], 00:08:32.745 "driver_specific": { 00:08:32.745 "nvme": [ 00:08:32.745 { 00:08:32.745 "trid": { 00:08:32.745 "trtype": "TCP", 00:08:32.745 "adrfam": "IPv4", 00:08:32.745 "traddr": "10.0.0.3", 00:08:32.745 "trsvcid": "4420", 00:08:32.745 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:32.745 }, 00:08:32.745 "ctrlr_data": { 00:08:32.745 "cntlid": 1, 00:08:32.745 "vendor_id": "0x8086", 00:08:32.745 "model_number": "SPDK bdev Controller", 00:08:32.745 "serial_number": "SPDK0", 00:08:32.745 "firmware_revision": "25.01", 00:08:32.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:32.745 "oacs": { 00:08:32.745 "security": 0, 00:08:32.745 "format": 0, 00:08:32.745 "firmware": 0, 
00:08:32.745 "ns_manage": 0 00:08:32.745 }, 00:08:32.745 "multi_ctrlr": true, 00:08:32.745 "ana_reporting": false 00:08:32.745 }, 00:08:32.745 "vs": { 00:08:32.745 "nvme_version": "1.3" 00:08:32.745 }, 00:08:32.745 "ns_data": { 00:08:32.745 "id": 1, 00:08:32.745 "can_share": true 00:08:32.745 } 00:08:32.745 } 00:08:32.745 ], 00:08:32.745 "mp_policy": "active_passive" 00:08:32.745 } 00:08:32.745 } 00:08:32.745 ] 00:08:32.745 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:32.745 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63836 00:08:32.745 15:56:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:32.745 Running I/O for 10 seconds... 00:08:33.705 Latency(us) 00:08:33.705 [2024-11-20T15:56:31.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.705 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.705 Nvme0n1 : 1.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:33.705 [2024-11-20T15:56:31.955Z] =================================================================================================================== 00:08:33.705 [2024-11-20T15:56:31.955Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:33.705 00:08:34.638 15:56:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:34.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.897 Nvme0n1 : 2.00 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:34.897 [2024-11-20T15:56:33.147Z] =================================================================================================================== 00:08:34.897 [2024-11-20T15:56:33.147Z] Total : 7302.50 28.53 0.00 0.00 0.00 0.00 0.00 00:08:34.897 00:08:34.897 true 00:08:35.154 15:56:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:35.154 15:56:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:35.413 15:56:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:35.413 15:56:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:35.413 15:56:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63836 00:08:35.978 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.978 Nvme0n1 : 3.00 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:08:35.978 [2024-11-20T15:56:34.228Z] =================================================================================================================== 00:08:35.978 [2024-11-20T15:56:34.228Z] Total : 7281.33 28.44 0.00 0.00 0.00 0.00 0.00 00:08:35.978 00:08:36.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.912 Nvme0n1 : 4.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:36.912 [2024-11-20T15:56:35.162Z] 
=================================================================================================================== 00:08:36.912 [2024-11-20T15:56:35.162Z] Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:36.912 00:08:37.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.845 Nvme0n1 : 5.00 7213.60 28.18 0.00 0.00 0.00 0.00 0.00 00:08:37.845 [2024-11-20T15:56:36.095Z] =================================================================================================================== 00:08:37.845 [2024-11-20T15:56:36.095Z] Total : 7213.60 28.18 0.00 0.00 0.00 0.00 0.00 00:08:37.845 00:08:38.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.781 Nvme0n1 : 6.00 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:38.781 [2024-11-20T15:56:37.031Z] =================================================================================================================== 00:08:38.781 [2024-11-20T15:56:37.031Z] Total : 7069.67 27.62 0.00 0.00 0.00 0.00 0.00 00:08:38.781 00:08:39.714 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.714 Nvme0n1 : 7.00 6967.43 27.22 0.00 0.00 0.00 0.00 0.00 00:08:39.714 [2024-11-20T15:56:37.964Z] =================================================================================================================== 00:08:39.714 [2024-11-20T15:56:37.964Z] Total : 6967.43 27.22 0.00 0.00 0.00 0.00 0.00 00:08:39.714 00:08:41.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.086 Nvme0n1 : 8.00 6922.00 27.04 0.00 0.00 0.00 0.00 0.00 00:08:41.086 [2024-11-20T15:56:39.336Z] =================================================================================================================== 00:08:41.086 [2024-11-20T15:56:39.336Z] Total : 6922.00 27.04 0.00 0.00 0.00 0.00 0.00 00:08:41.086 00:08:42.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.019 Nvme0n1 : 9.00 6900.78 26.96 0.00 0.00 0.00 0.00 0.00 00:08:42.019 [2024-11-20T15:56:40.269Z] =================================================================================================================== 00:08:42.019 [2024-11-20T15:56:40.269Z] Total : 6900.78 26.96 0.00 0.00 0.00 0.00 0.00 00:08:42.019 00:08:42.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.955 Nvme0n1 : 10.00 6883.80 26.89 0.00 0.00 0.00 0.00 0.00 00:08:42.955 [2024-11-20T15:56:41.205Z] =================================================================================================================== 00:08:42.955 [2024-11-20T15:56:41.205Z] Total : 6883.80 26.89 0.00 0.00 0.00 0.00 0.00 00:08:42.955 00:08:42.955 00:08:42.955 Latency(us) 00:08:42.955 [2024-11-20T15:56:41.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.955 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.955 Nvme0n1 : 10.01 6892.41 26.92 0.00 0.00 18566.63 6255.71 116296.61 00:08:42.955 [2024-11-20T15:56:41.205Z] =================================================================================================================== 00:08:42.955 [2024-11-20T15:56:41.205Z] Total : 6892.41 26.92 0.00 0.00 18566.63 6255.71 116296.61 00:08:42.955 { 00:08:42.955 "results": [ 00:08:42.955 { 00:08:42.955 "job": "Nvme0n1", 00:08:42.955 "core_mask": "0x2", 00:08:42.955 "workload": "randwrite", 00:08:42.955 "status": "finished", 00:08:42.955 "queue_depth": 128, 00:08:42.955 "io_size": 4096, 00:08:42.955 "runtime": 
10.00608, 00:08:42.955 "iops": 6892.409415075634, 00:08:42.955 "mibps": 26.923474277639194, 00:08:42.955 "io_failed": 0, 00:08:42.955 "io_timeout": 0, 00:08:42.955 "avg_latency_us": 18566.629707075685, 00:08:42.955 "min_latency_us": 6255.709090909091, 00:08:42.955 "max_latency_us": 116296.61090909092 00:08:42.955 } 00:08:42.955 ], 00:08:42.955 "core_count": 1 00:08:42.955 } 00:08:42.955 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63823 00:08:42.955 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 63823 ']' 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 63823 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63823 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.956 killing process with pid 63823 00:08:42.956 Received shutdown signal, test time was about 10.000000 seconds 00:08:42.956 00:08:42.956 Latency(us) 00:08:42.956 [2024-11-20T15:56:41.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.956 [2024-11-20T15:56:41.206Z] =================================================================================================================== 00:08:42.956 [2024-11-20T15:56:41.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63823' 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 63823 00:08:42.956 15:56:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 63823 00:08:42.956 15:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:43.523 15:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:43.781 15:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:43.781 15:56:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 63465 
00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 63465 00:08:44.039 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 63465 Killed "${NVMF_APP[@]}" "$@" 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63974 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63974 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 63974 ']' 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.039 15:56:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:44.039 [2024-11-20 15:56:42.131559] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:44.039 [2024-11-20 15:56:42.131647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.040 [2024-11-20 15:56:42.277091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.298 [2024-11-20 15:56:42.354573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.298 [2024-11-20 15:56:42.354666] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.298 [2024-11-20 15:56:42.354687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.298 [2024-11-20 15:56:42.354700] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.298 [2024-11-20 15:56:42.354713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:44.298 [2024-11-20 15:56:42.355226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.298 [2024-11-20 15:56:42.416517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.865 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.865 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:44.865 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.865 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.865 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.124 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.124 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.382 [2024-11-20 15:56:43.383355] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.382 [2024-11-20 15:56:43.384937] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.382 [2024-11-20 15:56:43.385225] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ee29a680-6a07-4b37-87da-882a32d25156 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ee29a680-6a07-4b37-87da-882a32d25156 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.382 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.641 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee29a680-6a07-4b37-87da-882a32d25156 -t 2000 00:08:45.899 [ 00:08:45.899 { 00:08:45.899 "name": "ee29a680-6a07-4b37-87da-882a32d25156", 00:08:45.899 "aliases": [ 00:08:45.899 "lvs/lvol" 00:08:45.899 ], 00:08:45.899 "product_name": "Logical Volume", 00:08:45.899 "block_size": 4096, 00:08:45.899 "num_blocks": 38912, 00:08:45.899 "uuid": "ee29a680-6a07-4b37-87da-882a32d25156", 00:08:45.899 "assigned_rate_limits": { 00:08:45.899 "rw_ios_per_sec": 0, 00:08:45.899 "rw_mbytes_per_sec": 0, 00:08:45.900 "r_mbytes_per_sec": 0, 00:08:45.900 "w_mbytes_per_sec": 0 00:08:45.900 }, 00:08:45.900 
"claimed": false, 00:08:45.900 "zoned": false, 00:08:45.900 "supported_io_types": { 00:08:45.900 "read": true, 00:08:45.900 "write": true, 00:08:45.900 "unmap": true, 00:08:45.900 "flush": false, 00:08:45.900 "reset": true, 00:08:45.900 "nvme_admin": false, 00:08:45.900 "nvme_io": false, 00:08:45.900 "nvme_io_md": false, 00:08:45.900 "write_zeroes": true, 00:08:45.900 "zcopy": false, 00:08:45.900 "get_zone_info": false, 00:08:45.900 "zone_management": false, 00:08:45.900 "zone_append": false, 00:08:45.900 "compare": false, 00:08:45.900 "compare_and_write": false, 00:08:45.900 "abort": false, 00:08:45.900 "seek_hole": true, 00:08:45.900 "seek_data": true, 00:08:45.900 "copy": false, 00:08:45.900 "nvme_iov_md": false 00:08:45.900 }, 00:08:45.900 "driver_specific": { 00:08:45.900 "lvol": { 00:08:45.900 "lvol_store_uuid": "3844e5bc-b633-4970-9a62-6ce9a8f49d5a", 00:08:45.900 "base_bdev": "aio_bdev", 00:08:45.900 "thin_provision": false, 00:08:45.900 "num_allocated_clusters": 38, 00:08:45.900 "snapshot": false, 00:08:45.900 "clone": false, 00:08:45.900 "esnap_clone": false 00:08:45.900 } 00:08:45.900 } 00:08:45.900 } 00:08:45.900 ] 00:08:45.900 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:45.900 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:45.900 15:56:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:46.158 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.158 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:46.158 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:46.417 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.417 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.675 [2024-11-20 15:56:44.836694] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.675 15:56:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.675 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.676 15:56:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:47.243 request: 00:08:47.243 { 00:08:47.243 "uuid": "3844e5bc-b633-4970-9a62-6ce9a8f49d5a", 00:08:47.243 "method": "bdev_lvol_get_lvstores", 00:08:47.243 "req_id": 1 00:08:47.243 } 00:08:47.243 Got JSON-RPC error response 00:08:47.243 response: 00:08:47.243 { 00:08:47.243 "code": -19, 00:08:47.243 "message": "No such device" 00:08:47.243 } 00:08:47.243 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:47.243 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:47.243 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:47.243 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:47.243 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.502 aio_bdev 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ee29a680-6a07-4b37-87da-882a32d25156 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ee29a680-6a07-4b37-87da-882a32d25156 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.502 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.761 15:56:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ee29a680-6a07-4b37-87da-882a32d25156 -t 2000 00:08:48.019 [ 00:08:48.019 { 
00:08:48.019 "name": "ee29a680-6a07-4b37-87da-882a32d25156", 00:08:48.019 "aliases": [ 00:08:48.019 "lvs/lvol" 00:08:48.019 ], 00:08:48.019 "product_name": "Logical Volume", 00:08:48.019 "block_size": 4096, 00:08:48.019 "num_blocks": 38912, 00:08:48.019 "uuid": "ee29a680-6a07-4b37-87da-882a32d25156", 00:08:48.019 "assigned_rate_limits": { 00:08:48.019 "rw_ios_per_sec": 0, 00:08:48.019 "rw_mbytes_per_sec": 0, 00:08:48.019 "r_mbytes_per_sec": 0, 00:08:48.019 "w_mbytes_per_sec": 0 00:08:48.019 }, 00:08:48.019 "claimed": false, 00:08:48.019 "zoned": false, 00:08:48.019 "supported_io_types": { 00:08:48.019 "read": true, 00:08:48.019 "write": true, 00:08:48.019 "unmap": true, 00:08:48.019 "flush": false, 00:08:48.019 "reset": true, 00:08:48.019 "nvme_admin": false, 00:08:48.019 "nvme_io": false, 00:08:48.019 "nvme_io_md": false, 00:08:48.019 "write_zeroes": true, 00:08:48.019 "zcopy": false, 00:08:48.019 "get_zone_info": false, 00:08:48.019 "zone_management": false, 00:08:48.019 "zone_append": false, 00:08:48.019 "compare": false, 00:08:48.019 "compare_and_write": false, 00:08:48.019 "abort": false, 00:08:48.019 "seek_hole": true, 00:08:48.019 "seek_data": true, 00:08:48.019 "copy": false, 00:08:48.019 "nvme_iov_md": false 00:08:48.019 }, 00:08:48.019 "driver_specific": { 00:08:48.019 "lvol": { 00:08:48.019 "lvol_store_uuid": "3844e5bc-b633-4970-9a62-6ce9a8f49d5a", 00:08:48.019 "base_bdev": "aio_bdev", 00:08:48.019 "thin_provision": false, 00:08:48.019 "num_allocated_clusters": 38, 00:08:48.019 "snapshot": false, 00:08:48.019 "clone": false, 00:08:48.020 "esnap_clone": false 00:08:48.020 } 00:08:48.020 } 00:08:48.020 } 00:08:48.020 ] 00:08:48.020 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:48.020 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:48.020 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:48.278 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:48.278 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:48.278 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:48.568 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:48.568 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ee29a680-6a07-4b37-87da-882a32d25156 00:08:48.853 15:56:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3844e5bc-b633-4970-9a62-6ce9a8f49d5a 00:08:49.113 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:49.372 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:49.942 00:08:49.942 real 0m21.117s 00:08:49.942 user 0m44.011s 00:08:49.942 sys 0m7.824s 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.942 ************************************ 00:08:49.942 END TEST lvs_grow_dirty 00:08:49.942 ************************************ 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:49.942 nvmf_trace.0 00:08:49.942 15:56:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:49.942 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:49.942 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.942 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:49.942 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.942 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:49.943 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.943 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.943 rmmod nvme_tcp 00:08:49.943 rmmod nvme_fabrics 00:08:49.943 rmmod nvme_keyring 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63974 ']' 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63974 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 63974 ']' 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 63974 00:08:50.201 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:50.202 15:56:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63974 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63974' 00:08:50.202 killing process with pid 63974 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 63974 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 63974 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:50.202 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:50.461 00:08:50.461 real 0m42.414s 00:08:50.461 user 1m8.977s 00:08:50.461 sys 0m11.224s 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.461 ************************************ 00:08:50.461 END TEST nvmf_lvs_grow 00:08:50.461 ************************************ 00:08:50.461 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.721 ************************************ 00:08:50.721 START TEST nvmf_bdev_io_wait 00:08:50.721 ************************************ 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.721 * Looking for test storage... 
00:08:50.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:50.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.721 --rc genhtml_branch_coverage=1 00:08:50.721 --rc genhtml_function_coverage=1 00:08:50.721 --rc genhtml_legend=1 00:08:50.721 --rc geninfo_all_blocks=1 00:08:50.721 --rc geninfo_unexecuted_blocks=1 00:08:50.721 00:08:50.721 ' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:50.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.721 --rc genhtml_branch_coverage=1 00:08:50.721 --rc genhtml_function_coverage=1 00:08:50.721 --rc genhtml_legend=1 00:08:50.721 --rc geninfo_all_blocks=1 00:08:50.721 --rc geninfo_unexecuted_blocks=1 00:08:50.721 00:08:50.721 ' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:50.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.721 --rc genhtml_branch_coverage=1 00:08:50.721 --rc genhtml_function_coverage=1 00:08:50.721 --rc genhtml_legend=1 00:08:50.721 --rc geninfo_all_blocks=1 00:08:50.721 --rc geninfo_unexecuted_blocks=1 00:08:50.721 00:08:50.721 ' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:50.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.721 --rc genhtml_branch_coverage=1 00:08:50.721 --rc genhtml_function_coverage=1 00:08:50.721 --rc genhtml_legend=1 00:08:50.721 --rc geninfo_all_blocks=1 00:08:50.721 --rc geninfo_unexecuted_blocks=1 00:08:50.721 00:08:50.721 ' 00:08:50.721 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.722 
15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:50.722 Cannot find device "nvmf_init_br" 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:50.722 Cannot find device "nvmf_init_br2" 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:50.722 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:50.981 Cannot find device "nvmf_tgt_br" 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.981 Cannot find device "nvmf_tgt_br2" 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:50.981 Cannot find device "nvmf_init_br" 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:50.981 15:56:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:50.981 Cannot find device "nvmf_init_br2" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:50.981 Cannot find device "nvmf_tgt_br" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:50.981 Cannot find device "nvmf_tgt_br2" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:50.981 Cannot find device "nvmf_br" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:50.981 Cannot find device "nvmf_init_if" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:50.981 Cannot find device "nvmf_init_if2" 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:50.981 
15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.981 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:50.982 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:51.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:08:51.240 00:08:51.240 --- 10.0.0.3 ping statistics --- 00:08:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.240 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:51.240 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:51.240 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:08:51.240 00:08:51.240 --- 10.0.0.4 ping statistics --- 00:08:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.240 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:51.240 00:08:51.240 --- 10.0.0.1 ping statistics --- 00:08:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.240 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:51.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:08:51.240 00:08:51.240 --- 10.0.0.2 ping statistics --- 00:08:51.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.240 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=64362 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 64362 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 64362 ']' 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.240 15:56:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:51.240 [2024-11-20 15:56:49.414503] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:08:51.240 [2024-11-20 15:56:49.415313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.499 [2024-11-20 15:56:49.564500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.499 [2024-11-20 15:56:49.632462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.499 [2024-11-20 15:56:49.632521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.499 [2024-11-20 15:56:49.632533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.499 [2024-11-20 15:56:49.632541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.499 [2024-11-20 15:56:49.632549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.499 [2024-11-20 15:56:49.633707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.499 [2024-11-20 15:56:49.633800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.499 [2024-11-20 15:56:49.633945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.499 [2024-11-20 15:56:49.633950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 [2024-11-20 15:56:50.538580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 [2024-11-20 15:56:50.551110] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 Malloc0 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.435 [2024-11-20 15:56:50.603174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=64397 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.435 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.435 { 00:08:52.435 
"params": { 00:08:52.436 "name": "Nvme$subsystem", 00:08:52.436 "trtype": "$TEST_TRANSPORT", 00:08:52.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "$NVMF_PORT", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.436 "hdgst": ${hdgst:-false}, 00:08:52.436 "ddgst": ${ddgst:-false} 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 } 00:08:52.436 EOF 00:08:52.436 )") 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=64399 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.436 { 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme$subsystem", 00:08:52.436 "trtype": "$TEST_TRANSPORT", 00:08:52.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "$NVMF_PORT", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.436 "hdgst": ${hdgst:-false}, 00:08:52.436 "ddgst": ${ddgst:-false} 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 } 00:08:52.436 EOF 00:08:52.436 )") 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=64402 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=64405 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:08:52.436 { 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme$subsystem", 00:08:52.436 "trtype": "$TEST_TRANSPORT", 00:08:52.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "$NVMF_PORT", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.436 "hdgst": ${hdgst:-false}, 00:08:52.436 "ddgst": ${ddgst:-false} 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 } 00:08:52.436 EOF 00:08:52.436 )") 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.436 { 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme$subsystem", 00:08:52.436 "trtype": "$TEST_TRANSPORT", 00:08:52.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "$NVMF_PORT", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.436 "hdgst": ${hdgst:-false}, 00:08:52.436 "ddgst": ${ddgst:-false} 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 } 00:08:52.436 EOF 00:08:52.436 )") 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme1", 00:08:52.436 "trtype": "tcp", 00:08:52.436 "traddr": "10.0.0.3", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "4420", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.436 "hdgst": false, 00:08:52.436 "ddgst": false 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 }' 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme1", 00:08:52.436 "trtype": "tcp", 00:08:52.436 "traddr": "10.0.0.3", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "4420", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.436 "hdgst": false, 00:08:52.436 "ddgst": false 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 }' 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme1", 00:08:52.436 "trtype": "tcp", 00:08:52.436 "traddr": "10.0.0.3", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "4420", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.436 "hdgst": false, 00:08:52.436 "ddgst": false 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 }' 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.436 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.436 "params": { 00:08:52.436 "name": "Nvme1", 00:08:52.436 "trtype": "tcp", 00:08:52.436 "traddr": "10.0.0.3", 00:08:52.436 "adrfam": "ipv4", 00:08:52.436 "trsvcid": "4420", 00:08:52.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.436 "hdgst": false, 00:08:52.436 "ddgst": false 00:08:52.436 }, 00:08:52.436 "method": "bdev_nvme_attach_controller" 00:08:52.436 }' 00:08:52.436 [2024-11-20 15:56:50.668624] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:52.436 [2024-11-20 15:56:50.669432] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:52.695 15:56:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 64397 00:08:52.695 [2024-11-20 15:56:50.693351] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:52.695 [2024-11-20 15:56:50.693486] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:52.695 [2024-11-20 15:56:50.704526] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:08:52.695 [2024-11-20 15:56:50.704642] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:52.695 [2024-11-20 15:56:50.714756] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
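Each bdevperf instance launched above receives its bdev configuration through /dev/fd/63, i.e. the JSON that gen_nvmf_target_json prints, supplied via process substitution; the printf output in the trace shows the bdev_nvme_attach_controller entry it resolves for the listener at 10.0.0.3:4420. A condensed sketch of the write-workload launch, assuming nvmf/common.sh has been sourced so the helper is available (option meanings per the standard SPDK app and bdevperf flags):

# -m 0x10 -i 1         : core mask and shared-memory instance id for this worker
# --json <(...)        : bdev config generated on the fly (appears as /dev/fd/63 above)
# -q 128 -o 4096       : 128 outstanding I/Os of 4096 bytes each
# -w write -t 1 -s 256 : write workload for 1 second with 256 MB of hugepage memory
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
    --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap instances above differ only in their core mask, instance id and -w argument, so four independent bdevperf processes exercise the same cnode1 namespace concurrently.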
00:08:52.695 [2024-11-20 15:56:50.714893] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:52.953 [2024-11-20 15:56:50.956854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.953 [2024-11-20 15:56:51.018385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.953 [2024-11-20 15:56:51.020761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.953 [2024-11-20 15:56:51.034883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.953 [2024-11-20 15:56:51.041634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.953 [2024-11-20 15:56:51.085174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:52.953 [2024-11-20 15:56:51.110508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.953 [2024-11-20 15:56:51.111270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:52.953 [2024-11-20 15:56:51.125102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.953 [2024-11-20 15:56:51.125514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.953 Running I/O for 1 seconds... 00:08:52.953 [2024-11-20 15:56:51.183031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:52.953 [2024-11-20 15:56:51.197040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.211 Running I/O for 1 seconds... 00:08:53.211 Running I/O for 1 seconds... 00:08:53.211 Running I/O for 1 seconds... 
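Before those four workloads start, the trace shows the target being configured entirely over RPC: because nvmf_tgt was started with --wait-for-rpc, bdev_set_options can still shrink the bdev I/O pool (-p 5 -c 1, the deliberately tiny pool that pushes I/O into the queued-wait path this test exercises) before framework_start_init finishes initialization, and only then are the TCP transport, the 64 MiB malloc namespace and the cnode1 subsystem created. The same sequence issued directly with scripts/rpc.py, the client behind the rpc_cmd wrapper in the trace (rpc.py path and default /var/tmp/spdk.sock socket assumed; arguments exactly as captured):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_set_options -p 5 -c 1                     # tiny bdev_io pool/cache, pre-init only
$rpc framework_start_init                           # finish subsystem initialization
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420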
00:08:54.146 9956.00 IOPS, 38.89 MiB/s 00:08:54.146 Latency(us) 00:08:54.146 [2024-11-20T15:56:52.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.146 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:54.146 Nvme1n1 : 1.01 10008.86 39.10 0.00 0.00 12730.61 4319.42 18350.08 00:08:54.146 [2024-11-20T15:56:52.396Z] =================================================================================================================== 00:08:54.146 [2024-11-20T15:56:52.396Z] Total : 10008.86 39.10 0.00 0.00 12730.61 4319.42 18350.08 00:08:54.146 162512.00 IOPS, 634.81 MiB/s 00:08:54.146 Latency(us) 00:08:54.146 [2024-11-20T15:56:52.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.146 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:54.146 Nvme1n1 : 1.00 162186.38 633.54 0.00 0.00 785.08 359.33 1966.08 00:08:54.146 [2024-11-20T15:56:52.396Z] =================================================================================================================== 00:08:54.146 [2024-11-20T15:56:52.396Z] Total : 162186.38 633.54 0.00 0.00 785.08 359.33 1966.08 00:08:54.147 7408.00 IOPS, 28.94 MiB/s 00:08:54.147 Latency(us) 00:08:54.147 [2024-11-20T15:56:52.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.147 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:54.147 Nvme1n1 : 1.01 7433.09 29.04 0.00 0.00 17094.70 9592.09 26095.24 00:08:54.147 [2024-11-20T15:56:52.397Z] =================================================================================================================== 00:08:54.147 [2024-11-20T15:56:52.397Z] Total : 7433.09 29.04 0.00 0.00 17094.70 9592.09 26095.24 00:08:54.147 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 64399 00:08:54.147 7366.00 IOPS, 28.77 MiB/s 00:08:54.147 Latency(us) 00:08:54.147 [2024-11-20T15:56:52.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.147 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:54.147 Nvme1n1 : 1.01 7459.17 29.14 0.00 0.00 17089.06 6613.18 28359.21 00:08:54.147 [2024-11-20T15:56:52.397Z] =================================================================================================================== 00:08:54.147 [2024-11-20T15:56:52.397Z] Total : 7459.17 29.14 0.00 0.00 17089.06 6613.18 28359.21 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 64402 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 64405 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.406 rmmod nvme_tcp 00:08:54.406 rmmod nvme_fabrics 00:08:54.406 rmmod nvme_keyring 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 64362 ']' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 64362 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 64362 ']' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 64362 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64362 00:08:54.406 killing process with pid 64362 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64362' 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 64362 00:08:54.406 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 64362 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:54.664 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:54.665 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:54.924 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:54.924 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:54.924 15:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:54.924 00:08:54.924 real 0m4.331s 00:08:54.924 user 0m17.476s 00:08:54.924 sys 0m2.401s 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.924 ************************************ 00:08:54.924 END TEST nvmf_bdev_io_wait 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.924 ************************************ 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:54.924 ************************************ 00:08:54.924 START TEST nvmf_queue_depth 00:08:54.924 ************************************ 00:08:54.924 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:55.184 * Looking for test storage... 
00:08:55.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:55.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.184 --rc genhtml_branch_coverage=1 00:08:55.184 --rc genhtml_function_coverage=1 00:08:55.184 --rc genhtml_legend=1 00:08:55.184 --rc geninfo_all_blocks=1 00:08:55.184 --rc geninfo_unexecuted_blocks=1 00:08:55.184 00:08:55.184 ' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:55.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.184 --rc genhtml_branch_coverage=1 00:08:55.184 --rc genhtml_function_coverage=1 00:08:55.184 --rc genhtml_legend=1 00:08:55.184 --rc geninfo_all_blocks=1 00:08:55.184 --rc geninfo_unexecuted_blocks=1 00:08:55.184 00:08:55.184 ' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:55.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.184 --rc genhtml_branch_coverage=1 00:08:55.184 --rc genhtml_function_coverage=1 00:08:55.184 --rc genhtml_legend=1 00:08:55.184 --rc geninfo_all_blocks=1 00:08:55.184 --rc geninfo_unexecuted_blocks=1 00:08:55.184 00:08:55.184 ' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:55.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.184 --rc genhtml_branch_coverage=1 00:08:55.184 --rc genhtml_function_coverage=1 00:08:55.184 --rc genhtml_legend=1 00:08:55.184 --rc geninfo_all_blocks=1 00:08:55.184 --rc geninfo_unexecuted_blocks=1 00:08:55.184 00:08:55.184 ' 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.184 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:55.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:55.185 
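Part of sourcing nvmf/common.sh above is generating a fresh initiator identity: nvme gen-hostnqn produces the uuid-based NQN captured as NVME_HOSTNQN, the uuid is reused as NVME_HOSTID, and NVME_HOST packages both as ready-made nvme-cli arguments. The connect call itself is not part of this excerpt; the snippet below is only a hypothetical illustration of how those variables would be consumed, with the NQN from this run:

NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:ca768c1a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}       # illustrative: strip the prefix, keep the uuid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

# hypothetical initiator-side attach to the subsystem exported by these tests
nvme connect -t tcp -a 10.0.0.3 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"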
15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:55.185 15:56:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:55.185 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:55.186 Cannot find device "nvmf_init_br" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:55.186 Cannot find device "nvmf_init_br2" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:55.186 Cannot find device "nvmf_tgt_br" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:55.186 Cannot find device "nvmf_tgt_br2" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:55.186 Cannot find device "nvmf_init_br" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:55.186 Cannot find device "nvmf_init_br2" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:55.186 Cannot find device "nvmf_tgt_br" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:55.186 Cannot find device "nvmf_tgt_br2" 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:55.186 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:55.186 Cannot find device "nvmf_br" 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:55.446 Cannot find device "nvmf_init_if" 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:55.446 Cannot find device "nvmf_init_if2" 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:55.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.446 15:56:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:55.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:55.446 
15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:55.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:55.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:08:55.446 00:08:55.446 --- 10.0.0.3 ping statistics --- 00:08:55.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.446 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:08:55.446 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:55.704 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:55.704 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:08:55.704 00:08:55.704 --- 10.0.0.4 ping statistics --- 00:08:55.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.704 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:55.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:08:55.704 00:08:55.704 --- 10.0.0.1 ping statistics --- 00:08:55.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.704 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:55.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:55.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:55.704 00:08:55.704 --- 10.0.0.2 ping statistics --- 00:08:55.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.704 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64698 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64698 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64698 ']' 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.704 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:55.705 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.705 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:55.705 15:56:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.705 [2024-11-20 15:56:53.804209] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
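nvmftestinit above rebuilds the same virtual topology used for the previous test: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.3 and 10.0.0.4), the initiator ends (10.0.0.1 and 10.0.0.2) left in the root namespace, every peer interface enslaved to the nvmf_br bridge, port 4420 opened, and connectivity ping-checked. Condensed from the commands in this trace, one veth pair shown (the second is analogous), followed by the target launch performed by the nvmfappstart step; the socket polling loop is a simplified stand-in for waitforlisten:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# queue_depth run: single-core target (-m 0x2) inside the namespace, then wait
# for the RPC socket before any rpc.py call is issued
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done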
00:08:55.705 [2024-11-20 15:56:53.805243] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.963 [2024-11-20 15:56:53.957256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.963 [2024-11-20 15:56:54.022568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.963 [2024-11-20 15:56:54.022902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.963 [2024-11-20 15:56:54.023094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.963 [2024-11-20 15:56:54.023264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.963 [2024-11-20 15:56:54.023330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.963 [2024-11-20 15:56:54.023889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.963 [2024-11-20 15:56:54.080455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 [2024-11-20 15:56:54.875684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 Malloc0 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 [2024-11-20 15:56:54.927707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64730 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64730 /var/tmp/bdevperf.sock 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 64730 ']' 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.900 15:56:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:56.900 [2024-11-20 15:56:55.002994] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:08:56.901 [2024-11-20 15:56:55.003680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64730 ] 00:08:57.159 [2024-11-20 15:56:55.164108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.159 [2024-11-20 15:56:55.259554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.159 [2024-11-20 15:56:55.321676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.093 NVMe0n1 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.093 15:56:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:58.093 Running I/O for 10 seconds... 00:09:00.398 6164.00 IOPS, 24.08 MiB/s [2024-11-20T15:56:59.584Z] 6724.00 IOPS, 26.27 MiB/s [2024-11-20T15:57:00.517Z] 7170.67 IOPS, 28.01 MiB/s [2024-11-20T15:57:01.452Z] 7329.00 IOPS, 28.63 MiB/s [2024-11-20T15:57:02.387Z] 7471.00 IOPS, 29.18 MiB/s [2024-11-20T15:57:03.320Z] 7540.67 IOPS, 29.46 MiB/s [2024-11-20T15:57:04.697Z] 7650.86 IOPS, 29.89 MiB/s [2024-11-20T15:57:05.630Z] 7791.50 IOPS, 30.44 MiB/s [2024-11-20T15:57:06.565Z] 7870.44 IOPS, 30.74 MiB/s [2024-11-20T15:57:06.565Z] 7923.50 IOPS, 30.95 MiB/s 00:09:08.315 Latency(us) 00:09:08.315 [2024-11-20T15:57:06.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:08.315 Verification LBA range: start 0x0 length 0x4000 00:09:08.315 NVMe0n1 : 10.07 7968.55 31.13 0.00 0.00 127894.33 12034.79 97231.59 00:09:08.315 [2024-11-20T15:57:06.565Z] =================================================================================================================== 00:09:08.315 [2024-11-20T15:57:06.565Z] Total : 7968.55 31.13 0.00 0.00 127894.33 12034.79 97231.59 00:09:08.315 { 00:09:08.315 "results": [ 00:09:08.315 { 00:09:08.315 "job": "NVMe0n1", 00:09:08.315 "core_mask": "0x1", 00:09:08.315 "workload": "verify", 00:09:08.315 "status": "finished", 00:09:08.315 "verify_range": { 00:09:08.315 "start": 0, 00:09:08.315 "length": 16384 00:09:08.315 }, 00:09:08.315 "queue_depth": 1024, 00:09:08.315 "io_size": 4096, 00:09:08.315 "runtime": 10.071975, 00:09:08.315 "iops": 7968.546387376855, 00:09:08.315 "mibps": 31.12713432569084, 00:09:08.315 "io_failed": 0, 00:09:08.315 "io_timeout": 0, 00:09:08.315 "avg_latency_us": 127894.33068912124, 00:09:08.315 "min_latency_us": 12034.792727272727, 00:09:08.315 "max_latency_us": 97231.59272727273 00:09:08.315 
} 00:09:08.315 ], 00:09:08.315 "core_count": 1 00:09:08.315 } 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64730 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64730 ']' 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64730 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64730 00:09:08.315 killing process with pid 64730 00:09:08.315 Received shutdown signal, test time was about 10.000000 seconds 00:09:08.315 00:09:08.315 Latency(us) 00:09:08.315 [2024-11-20T15:57:06.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.315 [2024-11-20T15:57:06.565Z] =================================================================================================================== 00:09:08.315 [2024-11-20T15:57:06.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64730' 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64730 00:09:08.315 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64730 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.574 rmmod nvme_tcp 00:09:08.574 rmmod nvme_fabrics 00:09:08.574 rmmod nvme_keyring 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64698 ']' 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64698 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 64698 ']' 00:09:08.574 
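The JSON block above is the raw output of bdevperf.py perform_tests. Assuming that output is captured to a file (the redirection is not part of the trace, so the filename below is hypothetical), the headline numbers can be pulled out with jq:

  # Hypothetical post-processing of the perform_tests JSON shown above;
  # queue_depth_result.json is an assumed capture of bdevperf.py's stdout.
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us (queue depth \(.queue_depth))"' \
      queue_depth_result.json
  # -> NVMe0n1: 7968.546387376855 IOPS, 31.12713432569084 MiB/s, avg latency 127894.33068912124 us (queue depth 1024)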
15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 64698 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64698 00:09:08.574 killing process with pid 64698 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64698' 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 64698 00:09:08.574 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 64698 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:08.832 15:57:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:08.832 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:09.091 15:57:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:09.091 ************************************ 00:09:09.091 END TEST nvmf_queue_depth 00:09:09.091 ************************************ 00:09:09.091 00:09:09.091 real 0m14.113s 00:09:09.091 user 0m24.248s 00:09:09.091 sys 0m2.208s 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.091 ************************************ 00:09:09.091 START TEST nvmf_target_multipath 00:09:09.091 ************************************ 00:09:09.091 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:09.091 * Looking for test storage... 
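For orientation before the next test starts, the nvmf_queue_depth run that just ended reduces to a short target/initiator sequence. The sketch below condenses the rpc_cmd traces above into direct rpc.py calls; it is illustrative only, not the literal test script, and assumes the target and bdevperf RPC sockets sit at the default /var/tmp paths seen in the trace.

  # Condensed sketch of the queue_depth flow traced above.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Initiator side: 1024 outstanding 4 KiB verify I/Os for 10 seconds.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  sleep 2   # the harness uses waitforlisten; a short sleep stands in for it here
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests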
00:09:09.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.350 --rc genhtml_branch_coverage=1 00:09:09.350 --rc genhtml_function_coverage=1 00:09:09.350 --rc genhtml_legend=1 00:09:09.350 --rc geninfo_all_blocks=1 00:09:09.350 --rc geninfo_unexecuted_blocks=1 00:09:09.350 00:09:09.350 ' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.350 --rc genhtml_branch_coverage=1 00:09:09.350 --rc genhtml_function_coverage=1 00:09:09.350 --rc genhtml_legend=1 00:09:09.350 --rc geninfo_all_blocks=1 00:09:09.350 --rc geninfo_unexecuted_blocks=1 00:09:09.350 00:09:09.350 ' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.350 --rc genhtml_branch_coverage=1 00:09:09.350 --rc genhtml_function_coverage=1 00:09:09.350 --rc genhtml_legend=1 00:09:09.350 --rc geninfo_all_blocks=1 00:09:09.350 --rc geninfo_unexecuted_blocks=1 00:09:09.350 00:09:09.350 ' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.350 --rc genhtml_branch_coverage=1 00:09:09.350 --rc genhtml_function_coverage=1 00:09:09.350 --rc genhtml_legend=1 00:09:09.350 --rc geninfo_all_blocks=1 00:09:09.350 --rc geninfo_unexecuted_blocks=1 00:09:09.350 00:09:09.350 ' 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.350 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.351 
15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.351 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:09.351 15:57:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:09.351 Cannot find device "nvmf_init_br" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:09.351 Cannot find device "nvmf_init_br2" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:09.351 Cannot find device "nvmf_tgt_br" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:09.351 Cannot find device "nvmf_tgt_br2" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:09.351 Cannot find device "nvmf_init_br" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:09.351 Cannot find device "nvmf_init_br2" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:09.351 Cannot find device "nvmf_tgt_br" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:09.351 Cannot find device "nvmf_tgt_br2" 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:09.351 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:09.352 Cannot find device "nvmf_br" 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:09.352 Cannot find device "nvmf_init_if" 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:09.352 Cannot find device "nvmf_init_if2" 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:09.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:09.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:09.352 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:09.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:09.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:09:09.610 00:09:09.610 --- 10.0.0.3 ping statistics --- 00:09:09.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.610 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:09:09.610 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:09.610 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:09.610 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:09:09.610 00:09:09.610 --- 10.0.0.4 ping statistics --- 00:09:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.611 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:09.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:09.611 00:09:09.611 --- 10.0.0.1 ping statistics --- 00:09:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.611 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:09.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:09:09.611 00:09:09.611 --- 10.0.0.2 ping statistics --- 00:09:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.611 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=65108 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 65108 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 65108 ']' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
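The veth/bridge plumbing that nvmf_veth_init just rebuilt for the multipath test is easier to see drawn out. The summary below is reconstructed from the ip/iptables traces above; the spot-check commands at the end are read-only suggestions, not part of the test script.

  # Topology built by nvmf_veth_init (names and addresses taken from the trace above):
  #   host:                  nvmf_init_if   10.0.0.1/24  <--veth-->  nvmf_init_br   \
  #   host:                  nvmf_init_if2  10.0.0.2/24  <--veth-->  nvmf_init_br2   >-- nvmf_br (host bridge)
  #   netns nvmf_tgt_ns_spdk: nvmf_tgt_if   10.0.0.3/24  <--veth-->  nvmf_tgt_br     >
  #   netns nvmf_tgt_ns_spdk: nvmf_tgt_if2  10.0.0.4/24  <--veth-->  nvmf_tgt_br2   /
  # iptables opens TCP/4420 on nvmf_init_if and nvmf_init_if2; the FORWARD rule on
  # nvmf_br is what lets the four pings above cross between host and target namespace.
  # Read-only spot checks:
  ip -br addr show
  ip link show master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip -br addr show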
00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.611 15:57:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.869 [2024-11-20 15:57:07.926753] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:09:09.869 [2024-11-20 15:57:07.926861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.869 [2024-11-20 15:57:08.085145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:10.133 [2024-11-20 15:57:08.159621] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.133 [2024-11-20 15:57:08.159693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.133 [2024-11-20 15:57:08.159706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.133 [2024-11-20 15:57:08.159717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.133 [2024-11-20 15:57:08.159726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.133 [2024-11-20 15:57:08.161118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.133 [2024-11-20 15:57:08.161198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:10.133 [2024-11-20 15:57:08.161259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:10.133 [2024-11-20 15:57:08.161263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.133 [2024-11-20 15:57:08.220246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.700 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.700 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:10.700 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:10.700 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.700 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.979 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.979 15:57:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:11.238 [2024-11-20 15:57:09.322986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.238 15:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:11.496 Malloc0 00:09:11.496 15:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:11.754 15:57:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.012 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:12.270 [2024-11-20 15:57:10.381420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:12.270 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:12.591 [2024-11-20 15:57:10.669713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:12.591 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:12.591 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:12.849 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:12.849 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:12.849 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:12.849 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:12.849 15:57:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=65203 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:14.796 15:57:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:14.796 [global] 00:09:14.796 thread=1 00:09:14.796 invalidate=1 00:09:14.796 rw=randrw 00:09:14.796 time_based=1 00:09:14.796 runtime=6 00:09:14.796 ioengine=libaio 00:09:14.796 direct=1 00:09:14.796 bs=4096 00:09:14.796 iodepth=128 00:09:14.796 norandommap=0 00:09:14.796 numjobs=1 00:09:14.796 00:09:14.796 verify_dump=1 00:09:14.796 verify_backlog=512 00:09:14.796 verify_state_save=0 00:09:14.796 do_verify=1 00:09:14.796 verify=crc32c-intel 00:09:14.796 [job0] 00:09:14.796 filename=/dev/nvme0n1 00:09:14.796 Could not set queue depth (nvme0n1) 00:09:15.054 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.054 fio-3.35 00:09:15.054 Starting 1 thread 00:09:15.988 15:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:16.246 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:16.504 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:16.505 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:16.762 15:57:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:17.020 15:57:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 65203 00:09:21.248 00:09:21.248 job0: (groupid=0, jobs=1): err= 0: pid=65224: Wed Nov 20 15:57:19 2024 00:09:21.248 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(244MiB/6006msec) 00:09:21.248 slat (usec): min=2, max=7311, avg=54.72, stdev=217.86 00:09:21.248 clat (usec): min=1329, max=15601, avg=8301.37, stdev=1389.89 00:09:21.248 lat (usec): min=1343, max=15616, avg=8356.09, stdev=1394.37 00:09:21.248 clat percentiles (usec): 00:09:21.248 | 1.00th=[ 4424], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7570], 00:09:21.248 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8160], 60.00th=[ 8291], 00:09:21.248 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11469], 00:09:21.248 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14615], 99.95th=[15008], 00:09:21.248 | 99.99th=[15270] 00:09:21.248 bw ( KiB/s): min= 5296, max=28232, per=51.88%, avg=21560.00, stdev=7420.51, samples=11 00:09:21.248 iops : min= 1324, max= 7058, avg=5389.91, stdev=1855.09, samples=11 00:09:21.248 write: IOPS=6403, BW=25.0MiB/s (26.2MB/s)(131MiB/5234msec); 0 zone resets 00:09:21.248 slat (usec): min=3, max=4986, avg=66.76, stdev=156.07 00:09:21.248 clat (usec): min=919, max=14420, avg=7277.60, stdev=1244.74 00:09:21.248 lat (usec): min=962, max=14473, avg=7344.36, stdev=1248.79 00:09:21.248 clat percentiles (usec): 00:09:21.248 | 1.00th=[ 3490], 5.00th=[ 4424], 10.00th=[ 5997], 20.00th=[ 6783], 00:09:21.248 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:09:21.248 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:09:21.248 | 99.00th=[11207], 99.50th=[11994], 99.90th=[13435], 99.95th=[13698], 00:09:21.248 | 99.99th=[14353] 00:09:21.248 bw ( KiB/s): min= 5688, max=28240, per=84.54%, avg=21655.00, stdev=7198.80, samples=11 00:09:21.248 iops : min= 1422, max= 7060, avg=5413.64, stdev=1799.65, samples=11 00:09:21.248 lat (usec) : 1000=0.01% 00:09:21.248 lat (msec) : 2=0.04%, 4=1.26%, 10=93.34%, 20=5.36% 00:09:21.248 cpu : usr=5.98%, sys=23.23%, ctx=5580, majf=0, minf=108 00:09:21.248 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:21.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.248 issued rwts: total=62399,33516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.248 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.248 00:09:21.248 Run status group 0 (all jobs): 00:09:21.248 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=244MiB (256MB), run=6006-6006msec 00:09:21.248 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=131MiB (137MB), run=5234-5234msec 00:09:21.248 00:09:21.248 Disk stats (read/write): 00:09:21.248 nvme0n1: ios=61778/32646, merge=0/0, ticks=490552/221261, in_queue=711813, util=98.62% 00:09:21.248 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:09:21.506 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=65307 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:21.765 15:57:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:21.765 [global] 00:09:21.765 thread=1 00:09:21.765 invalidate=1 00:09:21.765 rw=randrw 00:09:21.765 time_based=1 00:09:21.765 runtime=6 00:09:21.765 ioengine=libaio 00:09:21.765 direct=1 00:09:21.765 bs=4096 00:09:21.765 iodepth=128 00:09:21.765 norandommap=0 00:09:21.765 numjobs=1 00:09:21.765 00:09:21.765 verify_dump=1 00:09:21.765 verify_backlog=512 00:09:21.765 verify_state_save=0 00:09:21.765 do_verify=1 00:09:21.765 verify=crc32c-intel 00:09:21.765 [job0] 00:09:21.765 filename=/dev/nvme0n1 00:09:21.765 Could not set queue depth (nvme0n1) 00:09:22.023 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:22.023 fio-3.35 00:09:22.023 Starting 1 thread 00:09:22.958 15:57:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:23.216 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.475 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:23.733 15:57:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:23.991 15:57:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 65307 00:09:28.198 00:09:28.198 job0: (groupid=0, jobs=1): err= 0: pid=65328: Wed Nov 20 15:57:26 2024 00:09:28.198 read: IOPS=11.3k, BW=44.2MiB/s (46.4MB/s)(266MiB/6007msec) 00:09:28.198 slat (usec): min=3, max=10415, avg=43.78, stdev=199.23 00:09:28.198 clat (usec): min=326, max=18508, avg=7659.71, stdev=2403.36 00:09:28.198 lat (usec): min=343, max=18521, avg=7703.49, stdev=2414.09 00:09:28.198 clat percentiles (usec): 00:09:28.198 | 1.00th=[ 1221], 5.00th=[ 2769], 10.00th=[ 4146], 20.00th=[ 6259], 00:09:28.198 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8225], 00:09:28.198 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9765], 95.00th=[11863], 00:09:28.198 | 99.00th=[13435], 99.50th=[14615], 99.90th=[16188], 99.95th=[16909], 00:09:28.198 | 99.99th=[17957] 00:09:28.198 bw ( KiB/s): min=14704, max=33564, per=53.82%, avg=24382.91, stdev=6724.51, samples=11 00:09:28.198 iops : min= 3676, max= 8391, avg=6095.73, stdev=1681.13, samples=11 00:09:28.198 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(143MiB/5510msec); 0 zone resets 00:09:28.198 slat (usec): min=6, max=3723, avg=57.51, stdev=138.05 00:09:28.198 clat (usec): min=197, max=17834, avg=6618.34, stdev=2006.54 00:09:28.198 lat (usec): min=248, max=17857, avg=6675.85, stdev=2015.05 00:09:28.198 clat percentiles (usec): 00:09:28.198 | 1.00th=[ 1106], 5.00th=[ 2606], 10.00th=[ 3621], 20.00th=[ 4817], 00:09:28.198 | 30.00th=[ 6390], 40.00th=[ 6915], 50.00th=[ 7242], 60.00th=[ 7439], 00:09:28.198 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8291], 95.00th=[ 8717], 00:09:28.198 | 99.00th=[11338], 99.50th=[12256], 99.90th=[14353], 99.95th=[14746], 00:09:28.198 | 99.99th=[15139] 00:09:28.198 bw ( KiB/s): min=15008, max=33317, per=91.91%, avg=24345.09, stdev=6543.50, samples=11 00:09:28.198 iops : min= 3752, max= 8329, avg=6086.18, stdev=1635.93, samples=11 00:09:28.198 lat (usec) : 250=0.01%, 500=0.04%, 750=0.14%, 1000=0.40% 00:09:28.198 lat (msec) : 2=2.55%, 4=7.48%, 10=82.31%, 20=7.07% 00:09:28.198 cpu : usr=6.23%, sys=23.78%, ctx=6542, majf=0, minf=78 00:09:28.198 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:28.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.198 issued rwts: total=68031,36488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.198 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:09:28.198 00:09:28.198 Run status group 0 (all jobs): 00:09:28.198 READ: bw=44.2MiB/s (46.4MB/s), 44.2MiB/s-44.2MiB/s (46.4MB/s-46.4MB/s), io=266MiB (279MB), run=6007-6007msec 00:09:28.198 WRITE: bw=25.9MiB/s (27.1MB/s), 25.9MiB/s-25.9MiB/s (27.1MB/s-27.1MB/s), io=143MiB (149MB), run=5510-5510msec 00:09:28.198 00:09:28.198 Disk stats (read/write): 00:09:28.198 nvme0n1: ios=67411/35610, merge=0/0, ticks=489263/217146, in_queue=706409, util=98.53% 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:28.198 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.456 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.714 rmmod nvme_tcp 00:09:28.714 rmmod nvme_fabrics 00:09:28.714 rmmod nvme_keyring 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 65108 ']' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 65108 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 65108 ']' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 65108 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65108 00:09:28.714 killing process with pid 65108 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65108' 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 65108 00:09:28.714 15:57:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 65108 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:28.973 
15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:28.973 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:29.231 ************************************ 00:09:29.231 END TEST nvmf_target_multipath 00:09:29.231 ************************************ 00:09:29.231 00:09:29.231 real 0m19.994s 00:09:29.231 user 1m14.667s 00:09:29.231 sys 0m9.740s 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:29.231 ************************************ 00:09:29.231 START TEST nvmf_zcopy 00:09:29.231 ************************************ 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:29.231 * Looking for test storage... 
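The multipath run that ends above repeatedly pairs an rpc.py nvmf_subsystem_listener_set_ana_state call on the target with a check_ana_state poll of the host's /sys/block/nvme0cXn1/ana_state files, while fio keeps writing to /dev/nvme0n1 so the host-side failover between the 10.0.0.3 and 10.0.0.4 paths is actually exercised. Reconstructed from the target/multipath.sh fragments visible in the trace (the names match the trace, but the loop body is an illustrative approximation, not the verbatim script), that pattern looks roughly like this:

    # Flip one path to 'inaccessible' on the target side ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible

    # ... then wait (up to ~20s) for the kernel to report the new ANA state.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1
            sleep 1
        done
    }
    check_ana_state nvme0c0n1 inaccessible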
00:09:29.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:29.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.231 --rc genhtml_branch_coverage=1 00:09:29.231 --rc genhtml_function_coverage=1 00:09:29.231 --rc genhtml_legend=1 00:09:29.231 --rc geninfo_all_blocks=1 00:09:29.231 --rc geninfo_unexecuted_blocks=1 00:09:29.231 00:09:29.231 ' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:29.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.231 --rc genhtml_branch_coverage=1 00:09:29.231 --rc genhtml_function_coverage=1 00:09:29.231 --rc genhtml_legend=1 00:09:29.231 --rc geninfo_all_blocks=1 00:09:29.231 --rc geninfo_unexecuted_blocks=1 00:09:29.231 00:09:29.231 ' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:29.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.231 --rc genhtml_branch_coverage=1 00:09:29.231 --rc genhtml_function_coverage=1 00:09:29.231 --rc genhtml_legend=1 00:09:29.231 --rc geninfo_all_blocks=1 00:09:29.231 --rc geninfo_unexecuted_blocks=1 00:09:29.231 00:09:29.231 ' 00:09:29.231 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:29.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.232 --rc genhtml_branch_coverage=1 00:09:29.232 --rc genhtml_function_coverage=1 00:09:29.232 --rc genhtml_legend=1 00:09:29.232 --rc geninfo_all_blocks=1 00:09:29.232 --rc geninfo_unexecuted_blocks=1 00:09:29.232 00:09:29.232 ' 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
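The lcov probing just above gates the coverage flags on the tool's version via the lt/cmp_versions helpers from scripts/common.sh, which split each version on '.' and compare it field by field. A compact approximation of that comparison (version_lt is an illustrative name, not the helper actually used, and purely numeric fields are assumed):

    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    # version_lt 1.15 2 succeeds here, so the pre-2.0 spelling of the rc options
    # (--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1) is exported.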
00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.232 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:29.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:29.491 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
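nvmftestinit arms the SIGINT/SIGTERM/EXIT trap seen above so that the target process, the kernel NVMe modules and the virtual test network are torn down even when a test aborts; the teardown that closed the multipath run earlier in this log (nvmfcleanup, killprocess, bridge/veth deletion, namespace removal) follows roughly this shape. This is a simplified sketch only; the real nvmftestfini lives in test/nvmf/common.sh and does more bookkeeping:

    nvmf_test_teardown() {    # illustrative stand-in for nvmftestfini
        sync
        modprobe -v -r nvme-tcp || :
        modprobe -v -r nvme-fabrics || :
        # killprocess and process_shm are helpers from autotest_common.sh
        [[ -n ${nvmfpid:-} ]] && killprocess "$nvmfpid"
        ip link delete nvmf_br type bridge 2>/dev/null || :
        ip link delete nvmf_init_if 2>/dev/null || :
        ip link delete nvmf_init_if2 2>/dev/null || :
        ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || :
    }
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmf_test_teardown' SIGINT SIGTERM EXIT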
00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:29.492 Cannot find device "nvmf_init_br" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:29.492 15:57:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:29.492 Cannot find device "nvmf_init_br2" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:29.492 Cannot find device "nvmf_tgt_br" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:29.492 Cannot find device "nvmf_tgt_br2" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:29.492 Cannot find device "nvmf_init_br" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:29.492 Cannot find device "nvmf_init_br2" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:29.492 Cannot find device "nvmf_tgt_br" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:29.492 Cannot find device "nvmf_tgt_br2" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:29.492 Cannot find device "nvmf_br" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:29.492 Cannot find device "nvmf_init_if" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:29.492 Cannot find device "nvmf_init_if2" 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:29.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:29.492 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:29.492 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:29.751 15:57:27 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:29.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:29.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:09:29.751 00:09:29.751 --- 10.0.0.3 ping statistics --- 00:09:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.751 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:29.751 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:29.751 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:29.751 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:09:29.751 00:09:29.751 --- 10.0.0.4 ping statistics --- 00:09:29.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.751 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:29.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:29.752 00:09:29.752 --- 10.0.0.1 ping statistics --- 00:09:29.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.752 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:29.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:29.752 00:09:29.752 --- 10.0.0.2 ping statistics --- 00:09:29.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.752 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65628 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65628 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 65628 ']' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.752 15:57:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.752 [2024-11-20 15:57:27.957219] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:09:29.752 [2024-11-20 15:57:27.957310] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.010 [2024-11-20 15:57:28.103956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.010 [2024-11-20 15:57:28.171551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.010 [2024-11-20 15:57:28.171619] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.010 [2024-11-20 15:57:28.171633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.010 [2024-11-20 15:57:28.171644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.010 [2024-11-20 15:57:28.171652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.010 [2024-11-20 15:57:28.172158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.010 [2024-11-20 15:57:28.228784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 [2024-11-20 15:57:28.347032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.270 [2024-11-20 15:57:28.363218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 malloc0 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:30.270 { 00:09:30.270 "params": { 00:09:30.270 "name": "Nvme$subsystem", 00:09:30.270 "trtype": "$TEST_TRANSPORT", 00:09:30.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.270 "adrfam": "ipv4", 00:09:30.270 "trsvcid": "$NVMF_PORT", 00:09:30.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.270 "hdgst": ${hdgst:-false}, 00:09:30.270 "ddgst": ${ddgst:-false} 00:09:30.270 }, 00:09:30.270 "method": "bdev_nvme_attach_controller" 00:09:30.270 } 00:09:30.270 EOF 00:09:30.270 )") 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
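The rpc_cmd calls traced above are thin wrappers around SPDK's scripts/rpc.py, which talks to the running nvmf_tgt over the /var/tmp/spdk.sock UNIX socket shown earlier in the trace. A minimal sketch of the same zcopy target setup, replayed by hand with the flags copied verbatim from the trace (the rpc.py invocation and the assumption that the target is already up are mine, not part of the harness):

  # Sketch only: replay the zcopy target setup against an nvmf_tgt that is
  # already listening on /var/tmp/spdk.sock.
  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0      # 32 MB malloc bdev, 4096-byte blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

bdevperf is then pointed at that subsystem through the JSON that gen_nvmf_target_json assembles, printed in full a few lines below.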
00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:30.270 15:57:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:30.270 "params": { 00:09:30.270 "name": "Nvme1", 00:09:30.270 "trtype": "tcp", 00:09:30.270 "traddr": "10.0.0.3", 00:09:30.270 "adrfam": "ipv4", 00:09:30.270 "trsvcid": "4420", 00:09:30.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.270 "hdgst": false, 00:09:30.270 "ddgst": false 00:09:30.270 }, 00:09:30.270 "method": "bdev_nvme_attach_controller" 00:09:30.270 }' 00:09:30.270 [2024-11-20 15:57:28.463388] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:09:30.270 [2024-11-20 15:57:28.463497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65655 ] 00:09:30.529 [2024-11-20 15:57:28.617350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.529 [2024-11-20 15:57:28.687566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.529 [2024-11-20 15:57:28.752911] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:30.787 Running I/O for 10 seconds... 00:09:32.672 5866.00 IOPS, 45.83 MiB/s [2024-11-20T15:57:32.296Z] 5872.00 IOPS, 45.88 MiB/s [2024-11-20T15:57:33.254Z] 5871.00 IOPS, 45.87 MiB/s [2024-11-20T15:57:34.187Z] 5870.50 IOPS, 45.86 MiB/s [2024-11-20T15:57:35.118Z] 5863.20 IOPS, 45.81 MiB/s [2024-11-20T15:57:36.054Z] 5850.67 IOPS, 45.71 MiB/s [2024-11-20T15:57:36.988Z] 5836.29 IOPS, 45.60 MiB/s [2024-11-20T15:57:37.921Z] 5825.88 IOPS, 45.51 MiB/s [2024-11-20T15:57:39.303Z] 5830.56 IOPS, 45.55 MiB/s [2024-11-20T15:57:39.303Z] 5834.50 IOPS, 45.58 MiB/s 00:09:41.053 Latency(us) 00:09:41.053 [2024-11-20T15:57:39.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:41.053 Verification LBA range: start 0x0 length 0x1000 00:09:41.053 Nvme1n1 : 10.02 5835.94 45.59 0.00 0.00 21862.47 1511.80 30742.34 00:09:41.053 [2024-11-20T15:57:39.303Z] =================================================================================================================== 00:09:41.053 [2024-11-20T15:57:39.303Z] Total : 5835.94 45.59 0.00 0.00 21862.47 1511.80 30742.34 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65772 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.053 { 00:09:41.053 "params": { 00:09:41.053 "name": "Nvme$subsystem", 00:09:41.053 "trtype": "$TEST_TRANSPORT", 00:09:41.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.053 "adrfam": "ipv4", 00:09:41.053 "trsvcid": "$NVMF_PORT", 00:09:41.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.053 "hdgst": ${hdgst:-false}, 00:09:41.053 "ddgst": ${ddgst:-false} 00:09:41.053 }, 00:09:41.053 "method": "bdev_nvme_attach_controller" 00:09:41.053 } 00:09:41.053 EOF 00:09:41.053 )") 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:41.053 [2024-11-20 15:57:39.108789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.108849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:41.053 15:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.053 "params": { 00:09:41.053 "name": "Nvme1", 00:09:41.053 "trtype": "tcp", 00:09:41.053 "traddr": "10.0.0.3", 00:09:41.053 "adrfam": "ipv4", 00:09:41.053 "trsvcid": "4420", 00:09:41.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.053 "hdgst": false, 00:09:41.053 "ddgst": false 00:09:41.053 }, 00:09:41.053 "method": "bdev_nvme_attach_controller" 00:09:41.053 }' 00:09:41.053 [2024-11-20 15:57:39.120799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.120885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.132782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.132850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.144772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.144829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.144838] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:09:41.053 [2024-11-20 15:57:39.144912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65772 ] 00:09:41.053 [2024-11-20 15:57:39.156778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.156829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.168773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.168823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.180776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.180827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.192794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.192852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.204789] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.204845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.216783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.216838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.228801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.228869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.240801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.240862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.252805] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.252873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.264808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.264870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.272803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.272851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.280804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.280851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.053 [2024-11-20 15:57:39.283837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.053 [2024-11-20 15:57:39.292836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.053 [2024-11-20 15:57:39.292896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.304841] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.304891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.316839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.316903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.328879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.328940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.340889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.340950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.349668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.311 [2024-11-20 15:57:39.352870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.352913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.364879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.364956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.376899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.376964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.388911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.388974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.400899] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.400956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.408884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.408933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.412721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.311 [2024-11-20 15:57:39.416876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.416918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.428892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.428945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.440891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.440945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.452889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:41.311 [2024-11-20 15:57:39.452938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.465242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.465296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.477267] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.477322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.489257] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.489309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.501282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.501334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.513291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.513342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.525332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.525388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 [2024-11-20 15:57:39.537312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.537368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.311 Running I/O for 5 seconds... 
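Every pair of target-side messages in the stream that follows is the same two log lines: subsystem.c reports that NSID 1 is already attached to cnode1, and nvmf_rpc.c then fails the incoming nvmf_subsystem_add_ns RPC. The pairs repeat every few milliseconds before and during the 5-second random read/write run; only the timestamps change from one pair to the next. To see the namespace that is already occupying NSID 1, something like the following works (a sketch using the same rpc_cmd helper the harness uses; the jq filter is illustrative):

  # Show the namespaces cnode1 currently exposes (expect NSID 1 -> malloc0).
  rpc_cmd nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces'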
00:09:41.311 [2024-11-20 15:57:39.555491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.311 [2024-11-20 15:57:39.555568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.569053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.569119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.586688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.586765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.601104] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.601171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.617271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.617343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.633706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.633783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.650404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.650474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.667994] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.668065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.683429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.683501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.701669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.701740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.716721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.716789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.732546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.732613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.750230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.750293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.766065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.766131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.784082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 
[2024-11-20 15:57:39.784158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.798979] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.799044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.569 [2024-11-20 15:57:39.808178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.569 [2024-11-20 15:57:39.808243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.824056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.824123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.834161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.834225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.849233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.849312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.866972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.867058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.883556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.883622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.901078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.901142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.916551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.916612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.927495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.927551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.939696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.939764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.955299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.955368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.972589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.972663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:39.989739] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:39.989819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:40.006071] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:40.006145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:40.025074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:40.025143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:40.040509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:40.040577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:40.050349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:40.050409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.827 [2024-11-20 15:57:40.066378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.827 [2024-11-20 15:57:40.066459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.081710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.081787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.098190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.098269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.115213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.115287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.131830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.131897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.149096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.149161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.164845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.164926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.174312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.174372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.190671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.190743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.205639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.205717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.221445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.221510] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.239753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.239843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.255051] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.255128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.271120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.271172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.287913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.287979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.303394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.303472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.313096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.313161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.090 [2024-11-20 15:57:40.329276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.090 [2024-11-20 15:57:40.329344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.345075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.345142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.354615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.354686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.369562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.369632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.384639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.384704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.394648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.394708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.410037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.410102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.423940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.424003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.439623] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.439694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.449265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.449328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.465542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.465611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.481335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.481411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.493181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.493253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.511044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.511119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.526158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.526230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.542421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.542491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 11239.00 IOPS, 87.80 MiB/s [2024-11-20T15:57:40.604Z] [2024-11-20 15:57:40.560429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.560500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.575256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.575315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.354 [2024-11-20 15:57:40.592609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.354 [2024-11-20 15:57:40.592677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.612 [2024-11-20 15:57:40.607651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.607728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.617023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.617087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.633001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.633083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.649183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:42.613 [2024-11-20 15:57:40.649249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.665606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.665679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.683705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.683780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.698684] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.698756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.715211] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.715280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.730532] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.730606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.740242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.740308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.755029] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.755347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.770584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.770953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.780595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.780666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.796379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.796458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.813118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.813201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.830724] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.831019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.613 [2024-11-20 15:57:40.846045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.613 [2024-11-20 15:57:40.846116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.863983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.864060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.879254] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.879331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.889409] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.889481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.905124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.905195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.919779] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.919865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.936000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.936071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.953248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.953315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.970540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.970614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:40.985801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:40.985885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.001942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.002012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.020147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.020226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.035202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.035525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.050803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.050879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.066580] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.066783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.084566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.084644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.099761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.099850] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.871 [2024-11-20 15:57:41.109279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.871 [2024-11-20 15:57:41.109337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.125438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.125506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.142650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.143013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.159544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.159619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.175404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.175477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.193328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.193421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.208364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.208434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.225368] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.225450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.240732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.240832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.250498] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.250788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.266326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.266661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.284311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.284381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.299531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.299591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.311770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.311851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.329954] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.330028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.344952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.345118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.130 [2024-11-20 15:57:41.362937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.130 [2024-11-20 15:57:41.363014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.378314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.378384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.396186] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.396533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.411754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.412080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.421985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.422040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.437699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.437774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.455893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.455968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.470608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.470682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.485628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.387 [2024-11-20 15:57:41.485698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.387 [2024-11-20 15:57:41.501568] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.501632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.519131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.519437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.534345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.534647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 11325.50 IOPS, 88.48 MiB/s [2024-11-20T15:57:41.638Z] [2024-11-20 15:57:41.550715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:43.388 [2024-11-20 15:57:41.550771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.568502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.568558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.584171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.584237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.600679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.600746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.618712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.618782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.388 [2024-11-20 15:57:41.633833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.388 [2024-11-20 15:57:41.633907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.649382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.649690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.667732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.667805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.682517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.682582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.698334] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.698409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.714903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.714969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.731888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.731950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.748613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.748964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.764470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.764736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.774618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.774688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.790674] 
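The figures embedded between the error pairs (11239.00 IOPS, 87.80 MiB/s and 11325.50 IOPS, 88.48 MiB/s so far) are periodic progress samples from the randrw job, roughly one per second judging by the Jenkins timestamps. The bandwidth column is simply IOPS multiplied by the 8192-byte I/O size, which can be checked directly (bc is assumed to be available):

  # Quick sanity check of the MiB/s column.
  echo 'scale=2; 11325.50 * 8192 / 1048576' | bc    # prints 88.48
  echo 'scale=2; 5835.94 * 8192 / 1048576' | bc     # prints 45.59, matching the earlier 10-second verify run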
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.790746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.809017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.809090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.824583] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.824651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.842084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.842157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.857736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.857805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.875255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.875522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.646 [2024-11-20 15:57:41.890681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.646 [2024-11-20 15:57:41.891002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.901091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.901156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.916613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.916677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.933985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.934059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.950365] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.950429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.966760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.966848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.984648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.984721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:41.999465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:41.999539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.015325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.015390] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.033358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.033675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.049154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.049231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.065769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.065872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.081150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.081219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.090374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.090431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.907 [2024-11-20 15:57:42.107052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.907 [2024-11-20 15:57:42.107112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.908 [2024-11-20 15:57:42.123800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.908 [2024-11-20 15:57:42.123881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.908 [2024-11-20 15:57:42.140166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.908 [2024-11-20 15:57:42.140367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.156796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.156879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.174451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.174515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.190711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.190778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.207774] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.209176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.225432] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.225497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.243224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.243295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.258415] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.258490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.268435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.268678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.288292] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.288354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.304698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.304774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.322753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.322837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.337399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.166 [2024-11-20 15:57:42.337467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.166 [2024-11-20 15:57:42.353736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-11-20 15:57:42.353838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-11-20 15:57:42.370300] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-11-20 15:57:42.370376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-11-20 15:57:42.388970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-11-20 15:57:42.389050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.167 [2024-11-20 15:57:42.404082] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.167 [2024-11-20 15:57:42.404159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.419631] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.420005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.430233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.430304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.445869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.445952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.461465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.461539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.476950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.477031] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.487106] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.487177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.502880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.425 [2024-11-20 15:57:42.502955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.425 [2024-11-20 15:57:42.517289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.517380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.533425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.533507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 11303.00 IOPS, 88.30 MiB/s [2024-11-20T15:57:42.676Z] [2024-11-20 15:57:42.550741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.551096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.566371] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.566663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.582880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.582958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.599454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.599533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.616068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.616147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.632708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.632786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.649040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.649390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.426 [2024-11-20 15:57:42.659542] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.426 [2024-11-20 15:57:42.659619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.675021] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.675120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.690767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.690864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 
15:57:42.706331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.706402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.716404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.716469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.732703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.732774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.748957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.749026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.759404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.759711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.775075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.775142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.791142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.791206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.801009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.801079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.817194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.817261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.833801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.833879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.850525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.850798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.867132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.867196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.884493] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.884563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.900454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.900518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.684 [2024-11-20 15:57:42.917898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.684 [2024-11-20 15:57:42.917959] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:42.933968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:42.934033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:42.950798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:42.950867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:42.967891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:42.967945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:42.984180] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:42.984234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.000171] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.000235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.017677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.017742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.033879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.033944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.052601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.052672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.067620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.067938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.083271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.083564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.099150] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.099427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.109068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.109325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.123956] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.124201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.141332] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.141657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.157913] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.158216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.174415] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.174709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.943 [2024-11-20 15:57:43.189863] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.943 [2024-11-20 15:57:43.190174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.207717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.208062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.222957] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.223220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.238694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.238925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.248771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.249000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.263861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.264078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.281140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.281384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.297172] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.297239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.315036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.315113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.201 [2024-11-20 15:57:43.328736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.201 [2024-11-20 15:57:43.329060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.346517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.346574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.356990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.357051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.371611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.371681] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.388447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.388678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.406019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.406090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.421040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.421113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.202 [2024-11-20 15:57:43.437624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.202 [2024-11-20 15:57:43.437693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.454215] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.454284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.473308] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.473382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.488439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.488509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.497749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.497829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.513960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.514034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.530142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.530216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.539931] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.539992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 11309.50 IOPS, 88.36 MiB/s [2024-11-20T15:57:43.710Z] [2024-11-20 15:57:43.555487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.555824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.566009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.566289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.581202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.581508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 
15:57:43.597463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.597528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.615696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.615765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.630996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.631058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.647312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.647372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.664097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.664314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.680512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.680575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.460 [2024-11-20 15:57:43.698070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.460 [2024-11-20 15:57:43.698131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.713388] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.713461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.731344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.731614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.746675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.746964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.757047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.757111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.772418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.772482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.788996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.789067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.806269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.806584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.822938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.823008] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.839629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.839699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.856936] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.857002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.872015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.872316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.889692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.889761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.904619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.904690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.920781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.920879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.938274] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.938350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.718 [2024-11-20 15:57:43.954556] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.718 [2024-11-20 15:57:43.954628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.976 [2024-11-20 15:57:43.970632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.976 [2024-11-20 15:57:43.970703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.976 [2024-11-20 15:57:43.980472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.976 [2024-11-20 15:57:43.980541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.976 [2024-11-20 15:57:43.997328] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.976 [2024-11-20 15:57:43.997402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.976 [2024-11-20 15:57:44.006775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.976 [2024-11-20 15:57:44.007109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.022013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.022088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.037946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.038026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.055704] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.055778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.070758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.070844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.080827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.080896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.096019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.096284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.111335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.111637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.126765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.127046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.142793] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.142882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.152127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.152181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.167706] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.167776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.183317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.183382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.201314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.201377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.977 [2024-11-20 15:57:44.216488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.977 [2024-11-20 15:57:44.216552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.234605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.234906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.249753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.249841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.265471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.265537] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.281827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.281887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.298970] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.299039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.315697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.315762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.332290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.332355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.348827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.348914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.365344] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.365420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.375702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.375772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.390748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.391123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.406965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.407038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.425249] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.425331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.439720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.439789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.457572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.457647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.472459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.472528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.235 [2024-11-20 15:57:44.482091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.235 [2024-11-20 15:57:44.482314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.497844] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.497912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.515514] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.515584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.530351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.530419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 11334.20 IOPS, 88.55 MiB/s [2024-11-20T15:57:44.743Z] [2024-11-20 15:57:44.546025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.546090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.554182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.554239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:46.493
00:09:46.493                                                       Latency(us)
00:09:46.493 [2024-11-20T15:57:44.743Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:46.493 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:46.493 Nvme1n1                     :       5.01   11335.46      88.56       0.00       0.00   11277.60    4557.73   20494.89
00:09:46.493 [2024-11-20T15:57:44.743Z] ===================================================================================================================
00:09:46.493 [2024-11-20T15:57:44.743Z] Total                       :            11335.46      88.56       0.00       0.00   11277.60    4557.73   20494.89
00:09:46.493 [2024-11-20 15:57:44.566175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.566236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.578201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.578269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.590194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.590260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.602188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.602248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.614192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.614254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.626206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.626269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.638213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.638280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20
15:57:44.650212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.650277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.493 [2024-11-20 15:57:44.662214] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.493 [2024-11-20 15:57:44.662282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.674225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.674293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.686210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.686267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.698217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.698272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.710217] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.710273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.722229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.722284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.494 [2024-11-20 15:57:44.734220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.494 [2024-11-20 15:57:44.734268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.752 [2024-11-20 15:57:44.746224] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.752 [2024-11-20 15:57:44.746273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.752 [2024-11-20 15:57:44.758261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.752 [2024-11-20 15:57:44.758318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.752 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65772) - No such process 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65772 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.752 delay0 00:09:46.752 15:57:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.752 15:57:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:46.752 [2024-11-20 15:57:44.962112] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.305 Initializing NVMe Controllers 00:09:53.305 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.305 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.305 Initialization complete. Launching workers. 00:09:53.305 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 204 00:09:53.305 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 491, failed to submit 33 00:09:53.305 success 361, unsuccessful 130, failed 0 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:53.305 rmmod nvme_tcp 00:09:53.305 rmmod nvme_fabrics 00:09:53.305 rmmod nvme_keyring 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65628 ']' 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65628 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 65628 ']' 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 65628 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65628 
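The xtrace above is the tail of zcopy.sh: after the long run of expected "Requested NSID 1 already in use" errors, the test removes namespace 1 from nqn.2016-06.io.spdk:cnode1, wraps malloc0 in a delay bdev, exposes that bdev as namespace 1, and then drives abortable I/O at the target with the abort example. A minimal sketch of the same sequence issued by hand is shown below; it assumes a running nvmf target with the malloc0 bdev and the 10.0.0.3:4420 TCP listener already configured by the harness, and that rpc_cmd in the log forwards these calls to SPDK's scripts/rpc.py.

    # Sketch only: manual equivalent of the RPC sequence captured in the log,
    # run from the SPDK repo root against an already-running target.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # delay values copied from the log
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The delay bdev keeps I/O outstanding long enough for the abort example to have something to cancel, which is consistent with the "success 361, unsuccessful 130, failed 0" summary reported above before the target is torn down.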
00:09:53.305 killing process with pid 65628 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:53.305 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65628' 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 65628 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 65628 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:53.306 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.563 15:57:51 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:53.563 00:09:53.563 real 0m24.300s 00:09:53.563 user 0m39.827s 00:09:53.563 sys 0m6.790s 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.563 ************************************ 00:09:53.563 END TEST nvmf_zcopy 00:09:53.563 ************************************ 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.563 ************************************ 00:09:53.563 START TEST nvmf_nmic 00:09:53.563 ************************************ 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:53.563 * Looking for test storage... 00:09:53.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.563 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:53.564 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.832 --rc genhtml_branch_coverage=1 00:09:53.832 --rc genhtml_function_coverage=1 00:09:53.832 --rc genhtml_legend=1 00:09:53.832 --rc geninfo_all_blocks=1 00:09:53.832 --rc geninfo_unexecuted_blocks=1 00:09:53.832 00:09:53.832 ' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.832 --rc genhtml_branch_coverage=1 00:09:53.832 --rc genhtml_function_coverage=1 00:09:53.832 --rc genhtml_legend=1 00:09:53.832 --rc geninfo_all_blocks=1 00:09:53.832 --rc geninfo_unexecuted_blocks=1 00:09:53.832 00:09:53.832 ' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.832 --rc genhtml_branch_coverage=1 00:09:53.832 --rc genhtml_function_coverage=1 00:09:53.832 --rc genhtml_legend=1 00:09:53.832 --rc geninfo_all_blocks=1 00:09:53.832 --rc geninfo_unexecuted_blocks=1 00:09:53.832 00:09:53.832 ' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.832 --rc genhtml_branch_coverage=1 00:09:53.832 --rc genhtml_function_coverage=1 00:09:53.832 --rc genhtml_legend=1 00:09:53.832 --rc geninfo_all_blocks=1 00:09:53.832 --rc geninfo_unexecuted_blocks=1 00:09:53.832 00:09:53.832 ' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.832 15:57:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:53.832 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:53.833 15:57:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:53.833 Cannot 
find device "nvmf_init_br" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:53.833 Cannot find device "nvmf_init_br2" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:53.833 Cannot find device "nvmf_tgt_br" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.833 Cannot find device "nvmf_tgt_br2" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:53.833 Cannot find device "nvmf_init_br" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:53.833 Cannot find device "nvmf_init_br2" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:53.833 Cannot find device "nvmf_tgt_br" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:53.833 Cannot find device "nvmf_tgt_br2" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:53.833 Cannot find device "nvmf_br" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:53.833 Cannot find device "nvmf_init_if" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:53.833 Cannot find device "nvmf_init_if2" 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.833 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
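The nvmf_veth_init sequence that begins here (and continues below through the address assignment, bridge wiring, iptables rules, and ping checks) builds a self-contained NVMe/TCP test network. A condensed, standalone sketch of the same topology follows, assuming the interface names, 10.0.0.0/24 addresses, and port 4420 seen in this run; the real helper in test/nvmf/common.sh additionally handles cleanup and a second listener port:

  # target-side interfaces live in their own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # two initiator veth pairs stay in the root namespace ...
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  # ... two target veth pairs are created, then moved into the namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addresses: initiators 10.0.0.1-2, target listeners 10.0.0.3-4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # a bridge joins the *_br peers so the initiator and target sides can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # let NVMe/TCP traffic through to the listeners and allow bridged forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability check, mirroring the pings traced below
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1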
00:09:53.833 15:57:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:53.833 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.833 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.833 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.833 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:54.101 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:54.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:54.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:54.102 00:09:54.102 --- 10.0.0.3 ping statistics --- 00:09:54.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.102 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:54.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:54.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:09:54.102 00:09:54.102 --- 10.0.0.4 ping statistics --- 00:09:54.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.102 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:54.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:54.102 00:09:54.102 --- 10.0.0.1 ping statistics --- 00:09:54.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.102 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:54.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:09:54.102 00:09:54.102 --- 10.0.0.2 ping statistics --- 00:09:54.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.102 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=66147 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 66147 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 66147 ']' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.102 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.102 [2024-11-20 15:57:52.325993] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:09:54.102 [2024-11-20 15:57:52.326114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.363 [2024-11-20 15:57:52.474460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.363 [2024-11-20 15:57:52.539332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.363 [2024-11-20 15:57:52.539386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.363 [2024-11-20 15:57:52.539398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.363 [2024-11-20 15:57:52.539406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.363 [2024-11-20 15:57:52.539413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.363 [2024-11-20 15:57:52.540463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.363 [2024-11-20 15:57:52.540561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.363 [2024-11-20 15:57:52.540654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.363 [2024-11-20 15:57:52.541088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.363 [2024-11-20 15:57:52.597236] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 [2024-11-20 15:57:52.705500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 Malloc0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.621 15:57:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 [2024-11-20 15:57:52.771833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 test case1: single bdev can't be used in multiple subsystems 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.621 [2024-11-20 15:57:52.795658] bdev.c:8467:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:54.621 [2024-11-20 15:57:52.795703] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:54.621 [2024-11-20 15:57:52.795715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.621 request: 00:09:54.621 { 00:09:54.621 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:54.621 "namespace": { 00:09:54.621 "bdev_name": "Malloc0", 00:09:54.621 "no_auto_visible": false, 00:09:54.621 "hide_metadata": false 00:09:54.621 }, 00:09:54.621 "method": "nvmf_subsystem_add_ns", 00:09:54.621 "req_id": 1 00:09:54.621 } 00:09:54.621 Got JSON-RPC error response 00:09:54.621 response: 00:09:54.621 { 00:09:54.621 "code": -32602, 00:09:54.621 "message": "Invalid parameters" 00:09:54.621 } 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:54.621 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:54.621 Adding namespace failed - expected result. 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:54.622 test case2: host connect to nvmf target in multiple paths 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.622 [2024-11-20 15:57:52.807835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.622 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:54.880 15:57:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:54.880 15:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:54.880 15:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:54.880 15:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:54.880 15:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:54.880 15:57:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:57.410 15:57:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.410 [global] 00:09:57.410 thread=1 00:09:57.410 invalidate=1 00:09:57.410 rw=write 00:09:57.410 time_based=1 00:09:57.410 runtime=1 00:09:57.410 ioengine=libaio 00:09:57.410 direct=1 00:09:57.410 bs=4096 00:09:57.410 iodepth=1 00:09:57.410 norandommap=0 00:09:57.410 numjobs=1 00:09:57.410 00:09:57.410 verify_dump=1 00:09:57.410 verify_backlog=512 00:09:57.410 verify_state_save=0 00:09:57.410 do_verify=1 00:09:57.410 verify=crc32c-intel 00:09:57.410 [job0] 00:09:57.410 filename=/dev/nvme0n1 00:09:57.410 Could not set queue depth (nvme0n1) 00:09:57.410 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.410 fio-3.35 00:09:57.410 Starting 1 thread 00:09:58.342 00:09:58.342 job0: (groupid=0, jobs=1): err= 0: pid=66231: Wed Nov 20 15:57:56 2024 00:09:58.342 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:58.342 slat (nsec): min=11187, max=53418, avg=14033.17, stdev=4882.76 00:09:58.342 clat (usec): min=137, max=730, avg=174.56, stdev=19.20 00:09:58.342 lat (usec): min=149, max=742, avg=188.59, stdev=21.22 00:09:58.342 clat percentiles (usec): 00:09:58.342 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:09:58.342 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:09:58.342 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 204], 00:09:58.342 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 251], 99.95th=[ 260], 00:09:58.342 | 99.99th=[ 734] 00:09:58.342 write: IOPS=3139, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:09:58.342 slat (usec): min=15, max=142, avg=21.76, stdev= 8.12 00:09:58.342 clat (usec): min=86, max=248, avg=108.64, stdev=13.23 00:09:58.342 lat (usec): min=104, max=391, avg=130.40, stdev=18.60 00:09:58.342 clat percentiles (usec): 00:09:58.342 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 99], 00:09:58.342 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 110], 00:09:58.342 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 127], 95.00th=[ 137], 00:09:58.342 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 172], 99.95th=[ 176], 00:09:58.342 | 99.99th=[ 249] 00:09:58.342 bw ( KiB/s): min=12288, max=12288, per=97.84%, avg=12288.00, stdev= 0.00, samples=1 00:09:58.342 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:58.342 lat (usec) : 100=12.53%, 250=87.40%, 500=0.05%, 750=0.02% 00:09:58.342 cpu : usr=2.40%, sys=8.90%, ctx=6215, majf=0, minf=5 00:09:58.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.342 issued rwts: total=3072,3143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.342 00:09:58.342 Run status group 0 (all jobs): 00:09:58.342 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:58.342 WRITE: bw=12.3MiB/s (12.9MB/s), 12.3MiB/s-12.3MiB/s (12.9MB/s-12.9MB/s), io=12.3MiB (12.9MB), run=1001-1001msec 00:09:58.342 00:09:58.342 Disk stats (read/write): 00:09:58.342 nvme0n1: ios=2623/3072, merge=0/0, 
ticks=467/346, in_queue=813, util=91.28% 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.342 rmmod nvme_tcp 00:09:58.342 rmmod nvme_fabrics 00:09:58.342 rmmod nvme_keyring 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 66147 ']' 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 66147 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 66147 ']' 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 66147 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:58.342 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66147 00:09:58.600 killing process with pid 66147 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66147' 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 66147 00:09:58.600 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 66147 00:09:58.860 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.860 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.860 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.860 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:58.860 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:58.861 15:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:58.861 00:09:58.861 real 0m5.443s 00:09:58.861 user 0m15.859s 00:09:58.861 sys 0m2.365s 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.861 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.861 
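Stripped of the shell tracing, the two nmic cases above come down to a handful of JSON-RPC and nvme-cli calls. A condensed sketch using the NQNs, address, and ports from this run (rpc.py is scripts/rpc.py from the SPDK repo; the host NQN/ID placeholders stand in for the generated values shown earlier in the trace):

  # case 1: Malloc0 is already claimed by cnode1, so adding it to a second subsystem must fail
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # -> "Invalid parameters" (-32602), as logged above
  # case 2: the same subsystem exposed on a second listener gives the host two paths
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 --hostnqn=<HOSTNQN> --hostid=<HOSTID>
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 --hostnqn=<HOSTNQN> --hostid=<HOSTID>
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # tears down both controllers, as in the disconnect step above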
************************************ 00:09:58.861 END TEST nvmf_nmic 00:09:58.861 ************************************ 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:59.119 ************************************ 00:09:59.119 START TEST nvmf_fio_target 00:09:59.119 ************************************ 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:59.119 * Looking for test storage... 00:09:59.119 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.119 --rc genhtml_branch_coverage=1 00:09:59.119 --rc genhtml_function_coverage=1 00:09:59.119 --rc genhtml_legend=1 00:09:59.119 --rc geninfo_all_blocks=1 00:09:59.119 --rc geninfo_unexecuted_blocks=1 00:09:59.119 00:09:59.119 ' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.119 --rc genhtml_branch_coverage=1 00:09:59.119 --rc genhtml_function_coverage=1 00:09:59.119 --rc genhtml_legend=1 00:09:59.119 --rc geninfo_all_blocks=1 00:09:59.119 --rc geninfo_unexecuted_blocks=1 00:09:59.119 00:09:59.119 ' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.119 --rc genhtml_branch_coverage=1 00:09:59.119 --rc genhtml_function_coverage=1 00:09:59.119 --rc genhtml_legend=1 00:09:59.119 --rc geninfo_all_blocks=1 00:09:59.119 --rc geninfo_unexecuted_blocks=1 00:09:59.119 00:09:59.119 ' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.119 --rc genhtml_branch_coverage=1 00:09:59.119 --rc genhtml_function_coverage=1 00:09:59.119 --rc genhtml_legend=1 00:09:59.119 --rc geninfo_all_blocks=1 00:09:59.119 --rc geninfo_unexecuted_blocks=1 00:09:59.119 00:09:59.119 ' 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:59.119 
15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.119 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.377 15:57:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:59.377 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.378 Cannot find device "nvmf_init_br" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.378 Cannot find device "nvmf_init_br2" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.378 Cannot find device "nvmf_tgt_br" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.378 Cannot find device "nvmf_tgt_br2" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.378 Cannot find device "nvmf_init_br" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.378 Cannot find device "nvmf_init_br2" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.378 Cannot find device "nvmf_tgt_br" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.378 Cannot find device "nvmf_tgt_br2" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.378 Cannot find device "nvmf_br" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.378 Cannot find device "nvmf_init_if" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.378 Cannot find device "nvmf_init_if2" 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:59.378 
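The entries that follow tear down any leftover interfaces and rebuild the test network from scratch. Condensed into a standalone script, the topology nvmf_veth_init creates looks roughly like this (a sketch using plain iproute2 and iptables, with interface names and 10.0.0.x addresses taken from the log; the real common.sh additionally tags its iptables rules with a comment so it can remove them on teardown):

#!/usr/bin/env bash
# Sketch of the veth/namespace topology built by nvmf_veth_init (names from the log).
set -e

ip netns add nvmf_tgt_ns_spdk

# Two initiator-side and two target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators on .1/.2, targets on .3/.4, all in one /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

# Bridge the host-side peer interfaces together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in on the initiator interfaces and bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks in both directions, as in the log.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The nvmf_tgt process launched further down runs inside nvmf_tgt_ns_spdk, so it listens on 10.0.0.3 while fio and nvme-cli connect from the host side of the bridge.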
15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.378 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:59.635 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:59.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:09:59.636 00:09:59.636 --- 10.0.0.3 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:59.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:59.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:09:59.636 00:09:59.636 --- 10.0.0.4 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:59.636 00:09:59.636 --- 10.0.0.1 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:59.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:09:59.636 00:09:59.636 --- 10.0.0.2 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=66466 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 66466 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 66466 ']' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.636 15:57:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.636 [2024-11-20 15:57:57.880036] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:09:59.636 [2024-11-20 15:57:57.880685] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.893 [2024-11-20 15:57:58.032716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.894 [2024-11-20 15:57:58.102766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.894 [2024-11-20 15:57:58.102850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.894 [2024-11-20 15:57:58.102865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.894 [2024-11-20 15:57:58.102875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.894 [2024-11-20 15:57:58.102884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.894 [2024-11-20 15:57:58.104199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.894 [2024-11-20 15:57:58.104284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.894 [2024-11-20 15:57:58.104357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.894 [2024-11-20 15:57:58.104362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.168 [2024-11-20 15:57:58.162132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.733 15:57:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:00.991 [2024-11-20 15:57:59.159011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.991 15:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.555 15:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:01.555 15:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.813 15:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:01.814 15:57:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.071 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:02.071 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.329 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:02.329 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:02.587 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.845 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:02.845 15:58:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.103 15:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:03.103 15:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.361 15:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.361 15:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.618 15:58:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.876 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.876 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.443 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:04.443 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.701 15:58:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:04.959 [2024-11-20 15:58:03.029675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.959 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:05.217 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:05.474 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:05.748 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:05.748 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.748 15:58:03 
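Stripped of the xtrace noise, the provisioning sequence traced above boils down to the following rpc.py and nvme-cli calls (all values copied from the log; a condensed sketch of what target/fio.sh drives, not its literal source or ordering):

#!/usr/bin/env bash
# Condensed view of the target provisioning traced above (values from the log).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport, with the same extra flags the test passes (-o, -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

# Two plain malloc bdevs, plus a RAID0 and a concat bdev built from five more.
$rpc bdev_malloc_create 64 512      # -> Malloc0
$rpc bdev_malloc_create 64 512      # -> Malloc1
$rpc bdev_malloc_create 64 512      # -> Malloc2
$rpc bdev_malloc_create 64 512      # -> Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512      # -> Malloc4
$rpc bdev_malloc_create 64 512      # -> Malloc5
$rpc bdev_malloc_create 64 512      # -> Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing all four bdevs as namespaces on 10.0.0.3:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Host side: connect with the generated host NQN/ID from the log.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
     --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123

# The test then polls 'lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME'
# until all 4 namespaces show up as nvme0n1..nvme0n4, which fio uses below.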
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.748 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:05.748 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:05.748 15:58:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.659 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:07.660 15:58:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.660 [global] 00:10:07.660 thread=1 00:10:07.660 invalidate=1 00:10:07.660 rw=write 00:10:07.660 time_based=1 00:10:07.660 runtime=1 00:10:07.660 ioengine=libaio 00:10:07.660 direct=1 00:10:07.660 bs=4096 00:10:07.660 iodepth=1 00:10:07.660 norandommap=0 00:10:07.660 numjobs=1 00:10:07.660 00:10:07.660 verify_dump=1 00:10:07.660 verify_backlog=512 00:10:07.660 verify_state_save=0 00:10:07.660 do_verify=1 00:10:07.660 verify=crc32c-intel 00:10:07.660 [job0] 00:10:07.660 filename=/dev/nvme0n1 00:10:07.660 [job1] 00:10:07.660 filename=/dev/nvme0n2 00:10:07.660 [job2] 00:10:07.660 filename=/dev/nvme0n3 00:10:07.660 [job3] 00:10:07.660 filename=/dev/nvme0n4 00:10:07.660 Could not set queue depth (nvme0n1) 00:10:07.660 Could not set queue depth (nvme0n2) 00:10:07.660 Could not set queue depth (nvme0n3) 00:10:07.660 Could not set queue depth (nvme0n4) 00:10:07.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.918 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.918 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.918 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.918 fio-3.35 00:10:07.918 Starting 4 threads 00:10:09.292 00:10:09.292 job0: (groupid=0, jobs=1): err= 0: pid=66656: Wed Nov 20 15:58:07 2024 00:10:09.292 read: IOPS=1908, BW=7632KiB/s (7816kB/s)(7640KiB/1001msec) 00:10:09.292 slat (nsec): min=11218, max=56076, avg=15727.36, stdev=6134.03 00:10:09.292 clat (usec): min=146, max=492, avg=266.41, stdev=30.22 00:10:09.292 lat (usec): min=157, max=511, avg=282.14, stdev=30.81 00:10:09.292 clat percentiles (usec): 00:10:09.292 | 1.00th=[ 174], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:10:09.292 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:09.292 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:10:09.292 | 99.00th=[ 388], 99.50th=[ 445], 99.90th=[ 486], 99.95th=[ 494], 00:10:09.292 | 99.99th=[ 494] 
00:10:09.292 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:09.292 slat (usec): min=16, max=100, avg=23.77, stdev= 9.16 00:10:09.292 clat (usec): min=106, max=362, avg=197.70, stdev=26.36 00:10:09.292 lat (usec): min=134, max=412, avg=221.47, stdev=30.90 00:10:09.292 clat percentiles (usec): 00:10:09.292 | 1.00th=[ 129], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:10:09.292 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:10:09.292 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 219], 95.00th=[ 229], 00:10:09.292 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 355], 99.95th=[ 359], 00:10:09.292 | 99.99th=[ 363] 00:10:09.292 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:09.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:09.292 lat (usec) : 250=59.58%, 500=40.42% 00:10:09.292 cpu : usr=2.00%, sys=6.00%, ctx=3958, majf=0, minf=5 00:10:09.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.292 issued rwts: total=1910,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.292 job1: (groupid=0, jobs=1): err= 0: pid=66657: Wed Nov 20 15:58:07 2024 00:10:09.292 read: IOPS=1905, BW=7620KiB/s (7803kB/s)(7628KiB/1001msec) 00:10:09.292 slat (usec): min=11, max=151, avg=17.64, stdev= 7.78 00:10:09.292 clat (usec): min=175, max=2017, avg=269.11, stdev=65.55 00:10:09.292 lat (usec): min=187, max=2031, avg=286.75, stdev=65.62 00:10:09.292 clat percentiles (usec): 00:10:09.292 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 243], 20.00th=[ 249], 00:10:09.292 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:09.292 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 310], 00:10:09.292 | 99.00th=[ 396], 99.50th=[ 490], 99.90th=[ 1926], 99.95th=[ 2024], 00:10:09.292 | 99.99th=[ 2024] 00:10:09.292 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:09.292 slat (usec): min=16, max=109, avg=27.25, stdev=10.98 00:10:09.292 clat (usec): min=102, max=324, avg=189.83, stdev=20.81 00:10:09.292 lat (usec): min=123, max=433, avg=217.07, stdev=22.41 00:10:09.292 clat percentiles (usec): 00:10:09.292 | 1.00th=[ 121], 5.00th=[ 159], 10.00th=[ 169], 20.00th=[ 178], 00:10:09.292 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:10:09.292 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:10:09.292 | 99.00th=[ 241], 99.50th=[ 247], 99.90th=[ 269], 99.95th=[ 289], 00:10:09.292 | 99.99th=[ 326] 00:10:09.292 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:09.292 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:09.292 lat (usec) : 250=61.67%, 500=38.15%, 750=0.08%, 1000=0.05% 00:10:09.292 lat (msec) : 2=0.03%, 4=0.03% 00:10:09.292 cpu : usr=1.60%, sys=7.50%, ctx=3959, majf=0, minf=13 00:10:09.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.292 issued rwts: total=1907,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.292 latency : target=0, window=0, percentile=100.00%, depth=1 
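As a cross-check on how fio reports these figures: total latency is submission plus completion latency (lat ≈ slat + clat). For job0 above, 15.73 µs average slat plus 266.41 µs average clat gives the reported 282.14 µs average lat, and job1's 17.64 µs + 269.11 µs = 286.75 µs lines up the same way.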
00:10:09.292 job2: (groupid=0, jobs=1): err= 0: pid=66658: Wed Nov 20 15:58:07 2024 00:10:09.292 read: IOPS=2696, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec) 00:10:09.292 slat (nsec): min=10786, max=45271, avg=13333.67, stdev=3340.99 00:10:09.292 clat (usec): min=147, max=1668, avg=176.36, stdev=34.98 00:10:09.292 lat (usec): min=159, max=1680, avg=189.69, stdev=35.44 00:10:09.292 clat percentiles (usec): 00:10:09.292 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:10:09.292 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:10:09.293 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:09.293 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 519], 99.95th=[ 832], 00:10:09.293 | 99.99th=[ 1663] 00:10:09.293 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:09.293 slat (usec): min=13, max=107, avg=21.53, stdev= 6.82 00:10:09.293 clat (usec): min=105, max=388, avg=134.25, stdev=14.44 00:10:09.293 lat (usec): min=123, max=436, avg=155.79, stdev=17.41 00:10:09.293 clat percentiles (usec): 00:10:09.293 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 124], 00:10:09.293 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:09.293 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:10:09.293 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 223], 99.95th=[ 388], 00:10:09.293 | 99.99th=[ 388] 00:10:09.293 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:09.293 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:09.293 lat (usec) : 250=99.84%, 500=0.10%, 750=0.02%, 1000=0.02% 00:10:09.293 lat (msec) : 2=0.02% 00:10:09.293 cpu : usr=2.50%, sys=7.90%, ctx=5771, majf=0, minf=5 00:10:09.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.293 issued rwts: total=2699,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.293 job3: (groupid=0, jobs=1): err= 0: pid=66660: Wed Nov 20 15:58:07 2024 00:10:09.293 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:10:09.293 slat (nsec): min=11115, max=47625, avg=13294.76, stdev=2468.51 00:10:09.293 clat (usec): min=144, max=1738, avg=172.17, stdev=33.31 00:10:09.293 lat (usec): min=156, max=1750, avg=185.46, stdev=33.47 00:10:09.293 clat percentiles (usec): 00:10:09.293 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:10:09.293 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:10:09.293 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:10:09.293 | 99.00th=[ 206], 99.50th=[ 215], 99.90th=[ 285], 99.95th=[ 627], 00:10:09.293 | 99.99th=[ 1745] 00:10:09.293 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:09.293 slat (usec): min=14, max=181, avg=21.50, stdev= 9.90 00:10:09.293 clat (usec): min=2, max=279, avg=133.80, stdev=14.15 00:10:09.293 lat (usec): min=120, max=381, avg=155.30, stdev=16.12 00:10:09.293 clat percentiles (usec): 00:10:09.293 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 124], 00:10:09.293 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:09.293 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:10:09.293 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 204], 
99.95th=[ 253], 00:10:09.293 | 99.99th=[ 281] 00:10:09.293 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:09.293 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:09.293 lat (usec) : 4=0.02%, 50=0.09%, 100=0.09%, 250=99.71%, 500=0.07% 00:10:09.293 lat (usec) : 750=0.02% 00:10:09.293 lat (msec) : 2=0.02% 00:10:09.293 cpu : usr=1.70%, sys=8.50%, ctx=5862, majf=0, minf=15 00:10:09.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.293 issued rwts: total=2771,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.293 00:10:09.293 Run status group 0 (all jobs): 00:10:09.293 READ: bw=36.2MiB/s (38.0MB/s), 7620KiB/s-10.8MiB/s (7803kB/s-11.3MB/s), io=36.3MiB (38.0MB), run=1001-1001msec 00:10:09.293 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:09.293 00:10:09.293 Disk stats (read/write): 00:10:09.293 nvme0n1: ios=1586/1873, merge=0/0, ticks=446/388, in_queue=834, util=87.66% 00:10:09.293 nvme0n2: ios=1581/1878, merge=0/0, ticks=433/377, in_queue=810, util=88.64% 00:10:09.293 nvme0n3: ios=2353/2560, merge=0/0, ticks=419/372, in_queue=791, util=89.10% 00:10:09.293 nvme0n4: ios=2424/2560, merge=0/0, ticks=419/367, in_queue=786, util=89.75% 00:10:09.293 15:58:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:09.293 [global] 00:10:09.293 thread=1 00:10:09.293 invalidate=1 00:10:09.293 rw=randwrite 00:10:09.293 time_based=1 00:10:09.293 runtime=1 00:10:09.293 ioengine=libaio 00:10:09.293 direct=1 00:10:09.293 bs=4096 00:10:09.293 iodepth=1 00:10:09.293 norandommap=0 00:10:09.293 numjobs=1 00:10:09.293 00:10:09.293 verify_dump=1 00:10:09.293 verify_backlog=512 00:10:09.293 verify_state_save=0 00:10:09.293 do_verify=1 00:10:09.293 verify=crc32c-intel 00:10:09.293 [job0] 00:10:09.293 filename=/dev/nvme0n1 00:10:09.293 [job1] 00:10:09.293 filename=/dev/nvme0n2 00:10:09.293 [job2] 00:10:09.293 filename=/dev/nvme0n3 00:10:09.293 [job3] 00:10:09.293 filename=/dev/nvme0n4 00:10:09.293 Could not set queue depth (nvme0n1) 00:10:09.293 Could not set queue depth (nvme0n2) 00:10:09.293 Could not set queue depth (nvme0n3) 00:10:09.293 Could not set queue depth (nvme0n4) 00:10:09.293 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.293 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.293 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.293 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.293 fio-3.35 00:10:09.293 Starting 4 threads 00:10:10.674 00:10:10.674 job0: (groupid=0, jobs=1): err= 0: pid=66717: Wed Nov 20 15:58:08 2024 00:10:10.674 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:10.674 slat (nsec): min=16466, max=56559, avg=25526.60, stdev=5268.68 00:10:10.674 clat (usec): min=145, max=2292, avg=310.53, stdev=130.78 00:10:10.674 lat (usec): min=166, max=2322, avg=336.05, stdev=132.71 00:10:10.674 clat 
percentiles (usec): 00:10:10.674 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:10:10.674 | 30.00th=[ 190], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 343], 00:10:10.674 | 70.00th=[ 359], 80.00th=[ 388], 90.00th=[ 469], 95.00th=[ 494], 00:10:10.674 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 1254], 99.95th=[ 2278], 00:10:10.674 | 99.99th=[ 2278] 00:10:10.674 write: IOPS=2011, BW=8048KiB/s (8241kB/s)(8056KiB/1001msec); 0 zone resets 00:10:10.674 slat (usec): min=18, max=117, avg=33.26, stdev= 7.33 00:10:10.674 clat (usec): min=96, max=945, avg=201.51, stdev=74.24 00:10:10.674 lat (usec): min=127, max=971, avg=234.77, stdev=76.06 00:10:10.674 clat percentiles (usec): 00:10:10.674 | 1.00th=[ 106], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 129], 00:10:10.674 | 30.00th=[ 143], 40.00th=[ 157], 50.00th=[ 186], 60.00th=[ 241], 00:10:10.674 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:10:10.674 | 99.00th=[ 343], 99.50th=[ 478], 99.90th=[ 734], 99.95th=[ 734], 00:10:10.674 | 99.99th=[ 947] 00:10:10.674 bw ( KiB/s): min= 8192, max= 8192, per=25.63%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.674 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.674 lat (usec) : 100=0.08%, 250=51.30%, 500=46.68%, 750=1.80%, 1000=0.08% 00:10:10.674 lat (msec) : 2=0.03%, 4=0.03% 00:10:10.674 cpu : usr=2.90%, sys=8.20%, ctx=3550, majf=0, minf=5 00:10:10.674 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 issued rwts: total=1536,2014,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.675 job1: (groupid=0, jobs=1): err= 0: pid=66718: Wed Nov 20 15:58:08 2024 00:10:10.675 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:10.675 slat (nsec): min=11720, max=55883, avg=23719.58, stdev=5064.99 00:10:10.675 clat (usec): min=141, max=3298, avg=296.45, stdev=147.87 00:10:10.675 lat (usec): min=170, max=3344, avg=320.17, stdev=146.97 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 172], 00:10:10.675 | 30.00th=[ 188], 40.00th=[ 306], 50.00th=[ 330], 60.00th=[ 338], 00:10:10.675 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 392], 95.00th=[ 420], 00:10:10.675 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 3261], 99.95th=[ 3294], 00:10:10.675 | 99.99th=[ 3294] 00:10:10.675 write: IOPS=1956, BW=7824KiB/s (8012kB/s)(7832KiB/1001msec); 0 zone resets 00:10:10.675 slat (nsec): min=15074, max=81648, avg=31548.87, stdev=8024.89 00:10:10.675 clat (usec): min=97, max=7717, avg=222.89, stdev=289.84 00:10:10.675 lat (usec): min=133, max=7761, avg=254.44, stdev=289.65 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 121], 00:10:10.675 | 30.00th=[ 127], 40.00th=[ 145], 50.00th=[ 243], 60.00th=[ 265], 00:10:10.675 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:10:10.675 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 7504], 99.95th=[ 7701], 00:10:10.675 | 99.99th=[ 7701] 00:10:10.675 bw ( KiB/s): min= 7712, max= 7712, per=24.13%, avg=7712.00, stdev= 0.00, samples=1 00:10:10.675 iops : min= 1928, max= 1928, avg=1928.00, stdev= 0.00, samples=1 00:10:10.675 lat (usec) : 100=0.09%, 250=43.79%, 500=55.67%, 750=0.23%, 1000=0.03% 00:10:10.675 lat (msec) : 2=0.06%, 4=0.06%, 
10=0.09% 00:10:10.675 cpu : usr=2.50%, sys=8.10%, ctx=3495, majf=0, minf=21 00:10:10.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 issued rwts: total=1536,1958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.675 job2: (groupid=0, jobs=1): err= 0: pid=66719: Wed Nov 20 15:58:08 2024 00:10:10.675 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:10.675 slat (nsec): min=10901, max=76807, avg=18307.85, stdev=4155.61 00:10:10.675 clat (usec): min=147, max=768, avg=222.63, stdev=62.64 00:10:10.675 lat (usec): min=159, max=793, avg=240.94, stdev=62.91 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:10:10.675 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 196], 60.00th=[ 215], 00:10:10.675 | 70.00th=[ 235], 80.00th=[ 258], 90.00th=[ 334], 95.00th=[ 351], 00:10:10.675 | 99.00th=[ 375], 99.50th=[ 449], 99.90th=[ 553], 99.95th=[ 570], 00:10:10.675 | 99.99th=[ 766] 00:10:10.675 write: IOPS=2487, BW=9950KiB/s (10.2MB/s)(9960KiB/1001msec); 0 zone resets 00:10:10.675 slat (usec): min=14, max=132, avg=26.73, stdev= 7.23 00:10:10.675 clat (usec): min=79, max=425, avg=172.57, stdev=75.31 00:10:10.675 lat (usec): min=120, max=451, avg=199.30, stdev=75.30 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 126], 00:10:10.675 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 145], 00:10:10.675 | 70.00th=[ 159], 80.00th=[ 194], 90.00th=[ 330], 95.00th=[ 347], 00:10:10.675 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 388], 99.95th=[ 396], 00:10:10.675 | 99.99th=[ 424] 00:10:10.675 bw ( KiB/s): min=12288, max=12288, per=38.45%, avg=12288.00, stdev= 0.00, samples=1 00:10:10.675 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:10.675 lat (usec) : 100=0.02%, 250=80.87%, 500=19.00%, 750=0.09%, 1000=0.02% 00:10:10.675 cpu : usr=2.20%, sys=8.90%, ctx=4567, majf=0, minf=7 00:10:10.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 issued rwts: total=2048,2490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.675 job3: (groupid=0, jobs=1): err= 0: pid=66720: Wed Nov 20 15:58:08 2024 00:10:10.675 read: IOPS=1430, BW=5722KiB/s (5860kB/s)(5728KiB/1001msec) 00:10:10.675 slat (nsec): min=8631, max=44860, avg=18008.07, stdev=4090.29 00:10:10.675 clat (usec): min=217, max=3198, avg=357.32, stdev=92.28 00:10:10.675 lat (usec): min=234, max=3212, avg=375.33, stdev=92.50 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 258], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 326], 00:10:10.675 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:10:10.675 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 437], 00:10:10.675 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 1549], 99.95th=[ 3195], 00:10:10.675 | 99.99th=[ 3195] 00:10:10.675 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:10.675 slat (nsec): min=10682, max=97920, avg=27521.88, stdev=7693.88 
00:10:10.675 clat (usec): min=124, max=952, avg=269.26, stdev=60.68 00:10:10.675 lat (usec): min=140, max=984, avg=296.79, stdev=64.67 00:10:10.675 clat percentiles (usec): 00:10:10.675 | 1.00th=[ 133], 5.00th=[ 159], 10.00th=[ 184], 20.00th=[ 225], 00:10:10.675 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281], 00:10:10.675 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 359], 00:10:10.675 | 99.00th=[ 400], 99.50th=[ 412], 99.90th=[ 537], 99.95th=[ 955], 00:10:10.675 | 99.99th=[ 955] 00:10:10.675 bw ( KiB/s): min= 7728, max= 7728, per=24.18%, avg=7728.00, stdev= 0.00, samples=1 00:10:10.675 iops : min= 1932, max= 1932, avg=1932.00, stdev= 0.00, samples=1 00:10:10.675 lat (usec) : 250=14.69%, 500=84.67%, 750=0.54%, 1000=0.03% 00:10:10.675 lat (msec) : 2=0.03%, 4=0.03% 00:10:10.675 cpu : usr=1.90%, sys=5.60%, ctx=2971, majf=0, minf=11 00:10:10.675 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.675 issued rwts: total=1432,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.675 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.675 00:10:10.675 Run status group 0 (all jobs): 00:10:10.675 READ: bw=25.6MiB/s (26.8MB/s), 5722KiB/s-8184KiB/s (5860kB/s-8380kB/s), io=25.6MiB (26.8MB), run=1001-1001msec 00:10:10.675 WRITE: bw=31.2MiB/s (32.7MB/s), 6138KiB/s-9950KiB/s (6285kB/s-10.2MB/s), io=31.2MiB (32.8MB), run=1001-1001msec 00:10:10.675 00:10:10.675 Disk stats (read/write): 00:10:10.675 nvme0n1: ios=1261/1536, merge=0/0, ticks=461/364, in_queue=825, util=87.98% 00:10:10.675 nvme0n2: ios=1232/1536, merge=0/0, ticks=427/364, in_queue=791, util=87.39% 00:10:10.675 nvme0n3: ios=2048/2098, merge=0/0, ticks=468/307, in_queue=775, util=88.96% 00:10:10.675 nvme0n4: ios=1062/1536, merge=0/0, ticks=351/419, in_queue=770, util=89.59% 00:10:10.675 15:58:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:10.675 [global] 00:10:10.675 thread=1 00:10:10.675 invalidate=1 00:10:10.675 rw=write 00:10:10.675 time_based=1 00:10:10.675 runtime=1 00:10:10.675 ioengine=libaio 00:10:10.675 direct=1 00:10:10.675 bs=4096 00:10:10.675 iodepth=128 00:10:10.675 norandommap=0 00:10:10.675 numjobs=1 00:10:10.675 00:10:10.675 verify_dump=1 00:10:10.675 verify_backlog=512 00:10:10.675 verify_state_save=0 00:10:10.675 do_verify=1 00:10:10.675 verify=crc32c-intel 00:10:10.675 [job0] 00:10:10.675 filename=/dev/nvme0n1 00:10:10.675 [job1] 00:10:10.675 filename=/dev/nvme0n2 00:10:10.675 [job2] 00:10:10.675 filename=/dev/nvme0n3 00:10:10.675 [job3] 00:10:10.675 filename=/dev/nvme0n4 00:10:10.675 Could not set queue depth (nvme0n1) 00:10:10.675 Could not set queue depth (nvme0n2) 00:10:10.675 Could not set queue depth (nvme0n3) 00:10:10.675 Could not set queue depth (nvme0n4) 00:10:10.675 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.675 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.675 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.675 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:10.675 fio-3.35 00:10:10.675 
Starting 4 threads 00:10:12.046 00:10:12.046 job0: (groupid=0, jobs=1): err= 0: pid=66775: Wed Nov 20 15:58:09 2024 00:10:12.046 read: IOPS=4542, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:10:12.046 slat (usec): min=5, max=3729, avg=106.33, stdev=434.45 00:10:12.046 clat (usec): min=1858, max=25842, avg=14026.08, stdev=2775.87 00:10:12.046 lat (usec): min=1871, max=25856, avg=14132.41, stdev=2812.76 00:10:12.046 clat percentiles (usec): 00:10:12.046 | 1.00th=[ 6325], 5.00th=[11338], 10.00th=[12256], 20.00th=[12649], 00:10:12.046 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13829], 00:10:12.046 | 70.00th=[14091], 80.00th=[15401], 90.00th=[17433], 95.00th=[20579], 00:10:12.046 | 99.00th=[22414], 99.50th=[22938], 99.90th=[25560], 99.95th=[25822], 00:10:12.046 | 99.99th=[25822] 00:10:12.046 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:12.046 slat (usec): min=9, max=6325, avg=104.30, stdev=506.39 00:10:12.046 clat (usec): min=9426, max=26137, avg=13601.20, stdev=2061.64 00:10:12.046 lat (usec): min=9449, max=26172, avg=13705.50, stdev=2124.58 00:10:12.046 clat percentiles (usec): 00:10:12.046 | 1.00th=[10552], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:10:12.046 | 30.00th=[12387], 40.00th=[12780], 50.00th=[13173], 60.00th=[13435], 00:10:12.046 | 70.00th=[13829], 80.00th=[14484], 90.00th=[16057], 95.00th=[17957], 00:10:12.046 | 99.00th=[20579], 99.50th=[20579], 99.90th=[23725], 99.95th=[25822], 00:10:12.046 | 99.99th=[26084] 00:10:12.046 bw ( KiB/s): min=16992, max=19872, per=25.02%, avg=18432.00, stdev=2036.47, samples=2 00:10:12.046 iops : min= 4248, max= 4968, avg=4608.00, stdev=509.12, samples=2 00:10:12.046 lat (msec) : 2=0.08%, 4=0.13%, 10=0.93%, 20=94.03%, 50=4.83% 00:10:12.046 cpu : usr=3.59%, sys=12.97%, ctx=381, majf=0, minf=8 00:10:12.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:12.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.046 issued rwts: total=4556,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.046 job1: (groupid=0, jobs=1): err= 0: pid=66776: Wed Nov 20 15:58:09 2024 00:10:12.046 read: IOPS=4664, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1003msec) 00:10:12.046 slat (usec): min=5, max=7025, avg=100.13, stdev=413.37 00:10:12.046 clat (usec): min=2198, max=20539, avg=12992.53, stdev=1791.93 00:10:12.046 lat (usec): min=2210, max=20559, avg=13092.67, stdev=1824.65 00:10:12.046 clat percentiles (usec): 00:10:12.046 | 1.00th=[ 6456], 5.00th=[10421], 10.00th=[11600], 20.00th=[12256], 00:10:12.046 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:10:12.046 | 70.00th=[13435], 80.00th=[13698], 90.00th=[15008], 95.00th=[15664], 00:10:12.046 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:10:12.046 | 99.99th=[20579] 00:10:12.046 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:12.046 slat (usec): min=8, max=6420, avg=96.15, stdev=423.95 00:10:12.046 clat (usec): min=6901, max=17535, avg=12863.23, stdev=1079.58 00:10:12.046 lat (usec): min=6937, max=18829, avg=12959.38, stdev=1139.84 00:10:12.046 clat percentiles (usec): 00:10:12.046 | 1.00th=[10290], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:10:12.046 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:10:12.046 | 70.00th=[13173], 
80.00th=[13304], 90.00th=[14091], 95.00th=[15270], 00:10:12.046 | 99.00th=[16581], 99.50th=[16712], 99.90th=[17171], 99.95th=[17433], 00:10:12.046 | 99.99th=[17433] 00:10:12.046 bw ( KiB/s): min=20024, max=20480, per=27.49%, avg=20252.00, stdev=322.44, samples=2 00:10:12.046 iops : min= 5006, max= 5120, avg=5063.00, stdev=80.61, samples=2 00:10:12.046 lat (msec) : 4=0.31%, 10=1.42%, 20=98.26%, 50=0.01% 00:10:12.046 cpu : usr=4.69%, sys=13.27%, ctx=494, majf=0, minf=5 00:10:12.046 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:12.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.046 issued rwts: total=4678,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.046 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.046 job2: (groupid=0, jobs=1): err= 0: pid=66777: Wed Nov 20 15:58:09 2024 00:10:12.046 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:10:12.047 slat (usec): min=8, max=3971, avg=113.91, stdev=560.48 00:10:12.047 clat (usec): min=11051, max=17299, avg=14991.62, stdev=919.08 00:10:12.047 lat (usec): min=13775, max=17326, avg=15105.52, stdev=740.81 00:10:12.047 clat percentiles (usec): 00:10:12.047 | 1.00th=[11600], 5.00th=[13960], 10.00th=[14353], 20.00th=[14484], 00:10:12.047 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[14877], 00:10:12.047 | 70.00th=[15139], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:10:12.047 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17171], 99.95th=[17171], 00:10:12.047 | 99.99th=[17171] 00:10:12.047 write: IOPS=4508, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1001msec); 0 zone resets 00:10:12.047 slat (usec): min=11, max=3664, avg=111.33, stdev=496.15 00:10:12.047 clat (usec): min=314, max=16918, avg=14403.08, stdev=1502.05 00:10:12.047 lat (usec): min=3122, max=16954, avg=14514.41, stdev=1418.26 00:10:12.047 clat percentiles (usec): 00:10:12.047 | 1.00th=[ 7046], 5.00th=[12911], 10.00th=[13829], 20.00th=[13960], 00:10:12.047 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:10:12.047 | 70.00th=[14746], 80.00th=[15401], 90.00th=[16057], 95.00th=[16319], 00:10:12.047 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:10:12.047 | 99.99th=[16909] 00:10:12.047 bw ( KiB/s): min=16384, max=16384, per=22.24%, avg=16384.00, stdev= 0.00, samples=1 00:10:12.047 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:10:12.047 lat (usec) : 500=0.01% 00:10:12.047 lat (msec) : 4=0.36%, 10=0.38%, 20=99.24% 00:10:12.047 cpu : usr=3.50%, sys=11.70%, ctx=272, majf=0, minf=10 00:10:12.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:12.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.047 issued rwts: total=4096,4513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.047 job3: (groupid=0, jobs=1): err= 0: pid=66778: Wed Nov 20 15:58:09 2024 00:10:12.047 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:10:12.047 slat (usec): min=5, max=7014, avg=120.32, stdev=558.25 00:10:12.047 clat (usec): min=11004, max=23405, avg=15824.18, stdev=1895.30 00:10:12.047 lat (usec): min=11206, max=23429, avg=15944.51, stdev=1907.03 00:10:12.047 clat percentiles (usec): 00:10:12.047 | 1.00th=[11863], 5.00th=[13042], 
10.00th=[13829], 20.00th=[14484], 00:10:12.047 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15664], 00:10:12.047 | 70.00th=[16712], 80.00th=[17695], 90.00th=[18482], 95.00th=[19268], 00:10:12.047 | 99.00th=[20317], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:10:12.047 | 99.99th=[23462] 00:10:12.047 write: IOPS=4225, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec); 0 zone resets 00:10:12.047 slat (usec): min=8, max=7406, avg=112.05, stdev=685.45 00:10:12.047 clat (usec): min=482, max=23365, avg=14619.21, stdev=1972.42 00:10:12.047 lat (usec): min=6041, max=23409, avg=14731.26, stdev=2066.56 00:10:12.047 clat percentiles (usec): 00:10:12.047 | 1.00th=[ 7046], 5.00th=[11863], 10.00th=[12911], 20.00th=[13698], 00:10:12.047 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[14746], 00:10:12.047 | 70.00th=[15270], 80.00th=[16057], 90.00th=[16712], 95.00th=[17433], 00:10:12.047 | 99.00th=[20579], 99.50th=[21627], 99.90th=[22938], 99.95th=[23200], 00:10:12.047 | 99.99th=[23462] 00:10:12.047 bw ( KiB/s): min=16384, max=16536, per=22.34%, avg=16460.00, stdev=107.48, samples=2 00:10:12.047 iops : min= 4096, max= 4134, avg=4115.00, stdev=26.87, samples=2 00:10:12.047 lat (usec) : 500=0.01% 00:10:12.047 lat (msec) : 10=1.15%, 20=97.70%, 50=1.14% 00:10:12.047 cpu : usr=4.10%, sys=11.39%, ctx=254, majf=0, minf=13 00:10:12.047 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:12.047 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.047 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.047 issued rwts: total=4096,4234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.047 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.047 00:10:12.047 Run status group 0 (all jobs): 00:10:12.047 READ: bw=67.9MiB/s (71.2MB/s), 16.0MiB/s-18.2MiB/s (16.7MB/s-19.1MB/s), io=68.1MiB (71.4MB), run=1001-1003msec 00:10:12.047 WRITE: bw=72.0MiB/s (75.4MB/s), 16.5MiB/s-19.9MiB/s (17.3MB/s-20.9MB/s), io=72.2MiB (75.7MB), run=1001-1003msec 00:10:12.047 00:10:12.047 Disk stats (read/write): 00:10:12.047 nvme0n1: ios=3760/4096, merge=0/0, ticks=17178/15766, in_queue=32944, util=89.28% 00:10:12.047 nvme0n2: ios=4135/4354, merge=0/0, ticks=17105/16310, in_queue=33415, util=88.17% 00:10:12.047 nvme0n3: ios=3601/3808, merge=0/0, ticks=12490/12286, in_queue=24776, util=89.73% 00:10:12.047 nvme0n4: ios=3488/3584, merge=0/0, ticks=26958/22707, in_queue=49665, util=89.74% 00:10:12.047 15:58:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:12.047 [global] 00:10:12.047 thread=1 00:10:12.047 invalidate=1 00:10:12.047 rw=randwrite 00:10:12.047 time_based=1 00:10:12.047 runtime=1 00:10:12.047 ioengine=libaio 00:10:12.047 direct=1 00:10:12.047 bs=4096 00:10:12.047 iodepth=128 00:10:12.047 norandommap=0 00:10:12.047 numjobs=1 00:10:12.047 00:10:12.047 verify_dump=1 00:10:12.047 verify_backlog=512 00:10:12.047 verify_state_save=0 00:10:12.047 do_verify=1 00:10:12.047 verify=crc32c-intel 00:10:12.047 [job0] 00:10:12.047 filename=/dev/nvme0n1 00:10:12.047 [job1] 00:10:12.047 filename=/dev/nvme0n2 00:10:12.047 [job2] 00:10:12.047 filename=/dev/nvme0n3 00:10:12.047 [job3] 00:10:12.047 filename=/dev/nvme0n4 00:10:12.047 Could not set queue depth (nvme0n1) 00:10:12.047 Could not set queue depth (nvme0n2) 00:10:12.047 Could not set queue depth (nvme0n3) 00:10:12.047 Could not set queue depth (nvme0n4) 
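The fio-wrapper flags above map one-to-one onto the job file it echoes: -i 4096 becomes bs=4096, -d 128 becomes iodepth=128, -t randwrite becomes rw=randwrite, and -r 1 becomes runtime=1, with crc32c-intel data verification enabled. A rough standalone equivalent for a single device, written as a plain fio command line purely for illustration (the job name and single filename are not taken from the wrapper):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
      --time_based=1 --runtime=1 --verify=crc32c-intel --verify_state_save=0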
00:10:12.047 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.047 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.047 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.047 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.047 fio-3.35 00:10:12.047 Starting 4 threads 00:10:13.420 00:10:13.420 job0: (groupid=0, jobs=1): err= 0: pid=66837: Wed Nov 20 15:58:11 2024 00:10:13.420 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:10:13.420 slat (usec): min=6, max=6864, avg=101.75, stdev=638.08 00:10:13.420 clat (usec): min=8433, max=23932, avg=14211.12, stdev=1612.38 00:10:13.420 lat (usec): min=8456, max=28554, avg=14312.87, stdev=1647.46 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 8979], 5.00th=[12649], 10.00th=[13304], 20.00th=[13698], 00:10:13.420 | 30.00th=[13960], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:10:13.420 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:10:13.420 | 99.00th=[21890], 99.50th=[22676], 99.90th=[23987], 99.95th=[23987], 00:10:13.420 | 99.99th=[23987] 00:10:13.420 write: IOPS=4777, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1004msec); 0 zone resets 00:10:13.420 slat (usec): min=4, max=11628, avg=103.46, stdev=626.67 00:10:13.420 clat (usec): min=752, max=19952, avg=12874.18, stdev=1723.19 00:10:13.420 lat (usec): min=6098, max=19990, avg=12977.63, stdev=1644.52 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 6980], 5.00th=[10290], 10.00th=[11469], 20.00th=[12256], 00:10:13.420 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:10:13.420 | 70.00th=[13435], 80.00th=[13698], 90.00th=[13960], 95.00th=[14615], 00:10:13.420 | 99.00th=[19268], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:10:13.420 | 99.99th=[20055] 00:10:13.420 bw ( KiB/s): min=17472, max=19960, per=24.60%, avg=18716.00, stdev=1759.28, samples=2 00:10:13.420 iops : min= 4368, max= 4990, avg=4679.00, stdev=439.82, samples=2 00:10:13.420 lat (usec) : 1000=0.01% 00:10:13.420 lat (msec) : 10=4.17%, 20=94.96%, 50=0.86% 00:10:13.420 cpu : usr=4.29%, sys=13.06%, ctx=205, majf=0, minf=7 00:10:13.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:13.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.420 issued rwts: total=4608,4797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.420 job1: (groupid=0, jobs=1): err= 0: pid=66838: Wed Nov 20 15:58:11 2024 00:10:13.420 read: IOPS=5092, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:13.420 slat (usec): min=8, max=3132, avg=95.41, stdev=457.29 00:10:13.420 clat (usec): min=1876, max=14418, avg=12564.37, stdev=1095.20 00:10:13.420 lat (usec): min=1890, max=14430, avg=12659.78, stdev=1000.16 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 5342], 5.00th=[11469], 10.00th=[12256], 20.00th=[12518], 00:10:13.420 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12780], 00:10:13.420 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13042], 95.00th=[13173], 00:10:13.420 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14353], 99.95th=[14353], 00:10:13.420 | 99.99th=[14484] 
00:10:13.420 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:13.420 slat (usec): min=12, max=2964, avg=92.69, stdev=394.02 00:10:13.420 clat (usec): min=9399, max=13184, avg=12200.91, stdev=510.38 00:10:13.420 lat (usec): min=9638, max=13244, avg=12293.60, stdev=325.21 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 9896], 5.00th=[11600], 10.00th=[11863], 20.00th=[11994], 00:10:13.420 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12256], 60.00th=[12387], 00:10:13.420 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12649], 95.00th=[12780], 00:10:13.420 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13173], 99.95th=[13173], 00:10:13.420 | 99.99th=[13173] 00:10:13.420 bw ( KiB/s): min=20480, max=20480, per=26.92%, avg=20480.00, stdev= 0.00, samples=2 00:10:13.420 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:13.420 lat (msec) : 2=0.08%, 4=0.12%, 10=1.81%, 20=98.00% 00:10:13.420 cpu : usr=3.69%, sys=13.97%, ctx=320, majf=0, minf=3 00:10:13.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:13.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.420 issued rwts: total=5108,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.420 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.420 job2: (groupid=0, jobs=1): err= 0: pid=66839: Wed Nov 20 15:58:11 2024 00:10:13.420 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1005msec) 00:10:13.420 slat (usec): min=5, max=6860, avg=108.09, stdev=698.92 00:10:13.420 clat (usec): min=1616, max=24560, avg=14942.40, stdev=1864.95 00:10:13.420 lat (usec): min=7949, max=29058, avg=15050.49, stdev=1890.03 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 8586], 5.00th=[10290], 10.00th=[13960], 20.00th=[14484], 00:10:13.420 | 30.00th=[14746], 40.00th=[14877], 50.00th=[15139], 60.00th=[15270], 00:10:13.420 | 70.00th=[15401], 80.00th=[15664], 90.00th=[15926], 95.00th=[16450], 00:10:13.420 | 99.00th=[23200], 99.50th=[23725], 99.90th=[24511], 99.95th=[24511], 00:10:13.420 | 99.99th=[24511] 00:10:13.420 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:13.420 slat (usec): min=9, max=11362, avg=110.82, stdev=684.42 00:10:13.420 clat (usec): min=6975, max=20164, avg=13948.51, stdev=1381.00 00:10:13.420 lat (usec): min=9531, max=20188, avg=14059.33, stdev=1243.21 00:10:13.420 clat percentiles (usec): 00:10:13.420 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:10:13.420 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:10:13.420 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[16057], 00:10:13.421 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:10:13.421 | 99.99th=[20055] 00:10:13.421 bw ( KiB/s): min=17912, max=18944, per=24.22%, avg=18428.00, stdev=729.73, samples=2 00:10:13.421 iops : min= 4478, max= 4736, avg=4607.00, stdev=182.43, samples=2 00:10:13.421 lat (msec) : 2=0.01%, 10=2.96%, 20=95.98%, 50=1.05% 00:10:13.421 cpu : usr=4.08%, sys=12.15%, ctx=178, majf=0, minf=7 00:10:13.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:13.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.421 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.421 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:10:13.421 job3: (groupid=0, jobs=1): err= 0: pid=66840: Wed Nov 20 15:58:11 2024 00:10:13.421 read: IOPS=4323, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec) 00:10:13.421 slat (usec): min=9, max=7586, avg=107.20, stdev=701.99 00:10:13.421 clat (usec): min=1453, max=23643, avg=14642.76, stdev=1787.31 00:10:13.421 lat (usec): min=7230, max=28446, avg=14749.96, stdev=1808.27 00:10:13.421 clat percentiles (usec): 00:10:13.421 | 1.00th=[ 7963], 5.00th=[10290], 10.00th=[13829], 20.00th=[14222], 00:10:13.421 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:10:13.421 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[16057], 00:10:13.421 | 99.00th=[22152], 99.50th=[22676], 99.90th=[23725], 99.95th=[23725], 00:10:13.421 | 99.99th=[23725] 00:10:13.421 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:10:13.421 slat (usec): min=7, max=13322, avg=109.91, stdev=689.42 00:10:13.421 clat (usec): min=7181, max=23234, avg=13846.95, stdev=1778.03 00:10:13.421 lat (usec): min=9581, max=23257, avg=13956.86, stdev=1673.21 00:10:13.421 clat percentiles (usec): 00:10:13.421 | 1.00th=[ 8848], 5.00th=[11994], 10.00th=[12387], 20.00th=[12780], 00:10:13.421 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:10:13.421 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15926], 95.00th=[16909], 00:10:13.421 | 99.00th=[22676], 99.50th=[22938], 99.90th=[23200], 99.95th=[23200], 00:10:13.421 | 99.99th=[23200] 00:10:13.421 bw ( KiB/s): min=17912, max=18952, per=24.23%, avg=18432.00, stdev=735.39, samples=2 00:10:13.421 iops : min= 4478, max= 4738, avg=4608.00, stdev=183.85, samples=2 00:10:13.421 lat (msec) : 2=0.01%, 10=3.36%, 20=95.10%, 50=1.53% 00:10:13.421 cpu : usr=3.58%, sys=11.24%, ctx=246, majf=0, minf=6 00:10:13.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:13.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.421 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.421 00:10:13.421 Run status group 0 (all jobs): 00:10:13.421 READ: bw=71.0MiB/s (74.5MB/s), 16.4MiB/s-19.9MiB/s (17.2MB/s-20.9MB/s), io=71.4MiB (74.9MB), run=1003-1006msec 00:10:13.421 WRITE: bw=74.3MiB/s (77.9MB/s), 17.9MiB/s-19.9MiB/s (18.8MB/s-20.9MB/s), io=74.7MiB (78.4MB), run=1003-1006msec 00:10:13.421 00:10:13.421 Disk stats (read/write): 00:10:13.421 nvme0n1: ios=3884/4096, merge=0/0, ticks=52010/49281, in_queue=101291, util=89.28% 00:10:13.421 nvme0n2: ios=4122/4608, merge=0/0, ticks=11947/12000, in_queue=23947, util=87.22% 00:10:13.421 nvme0n3: ios=3601/3840, merge=0/0, ticks=51383/49811, in_queue=101194, util=89.29% 00:10:13.421 nvme0n4: ios=3584/3904, merge=0/0, ticks=50245/50828, in_queue=101073, util=89.62% 00:10:13.421 15:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:13.421 15:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66853 00:10:13.421 15:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:13.421 15:58:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:13.421 [global] 00:10:13.421 thread=1 00:10:13.421 invalidate=1 00:10:13.421 rw=read 00:10:13.421 
time_based=1 00:10:13.421 runtime=10 00:10:13.421 ioengine=libaio 00:10:13.421 direct=1 00:10:13.421 bs=4096 00:10:13.421 iodepth=1 00:10:13.421 norandommap=1 00:10:13.421 numjobs=1 00:10:13.421 00:10:13.421 [job0] 00:10:13.421 filename=/dev/nvme0n1 00:10:13.421 [job1] 00:10:13.421 filename=/dev/nvme0n2 00:10:13.421 [job2] 00:10:13.421 filename=/dev/nvme0n3 00:10:13.421 [job3] 00:10:13.421 filename=/dev/nvme0n4 00:10:13.421 Could not set queue depth (nvme0n1) 00:10:13.421 Could not set queue depth (nvme0n2) 00:10:13.421 Could not set queue depth (nvme0n3) 00:10:13.421 Could not set queue depth (nvme0n4) 00:10:13.421 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.421 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.421 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.421 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.421 fio-3.35 00:10:13.421 Starting 4 threads 00:10:16.701 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:16.701 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42385408, buflen=4096 00:10:16.701 fio: pid=66900, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.701 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:16.701 fio: pid=66899, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.701 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=69791744, buflen=4096 00:10:16.701 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.701 15:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:16.959 fio: pid=66897, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:16.959 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11530240, buflen=4096 00:10:16.959 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:16.959 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:17.218 fio: pid=66898, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.218 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61083648, buflen=4096 00:10:17.218 00:10:17.218 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66897: Wed Nov 20 15:58:15 2024 00:10:17.218 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(75.0MiB/3443msec) 00:10:17.218 slat (usec): min=10, max=13088, avg=15.05, stdev=155.03 00:10:17.218 clat (usec): min=133, max=3413, avg=163.05, stdev=40.07 00:10:17.218 lat (usec): min=148, max=13333, avg=178.10, stdev=160.95 00:10:17.218 clat percentiles (usec): 00:10:17.218 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:10:17.218 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 
60.00th=[ 163], 00:10:17.218 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 182], 00:10:17.218 | 99.00th=[ 196], 99.50th=[ 212], 99.90th=[ 486], 99.95th=[ 824], 00:10:17.218 | 99.99th=[ 2835] 00:10:17.218 bw ( KiB/s): min=21720, max=23016, per=34.51%, avg=22678.67, stdev=483.86, samples=6 00:10:17.218 iops : min= 5430, max= 5754, avg=5669.67, stdev=120.97, samples=6 00:10:17.218 lat (usec) : 250=99.59%, 500=0.32%, 750=0.03%, 1000=0.03% 00:10:17.218 lat (msec) : 2=0.02%, 4=0.01% 00:10:17.218 cpu : usr=1.66%, sys=6.25%, ctx=19205, majf=0, minf=1 00:10:17.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 issued rwts: total=19200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.218 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66898: Wed Nov 20 15:58:15 2024 00:10:17.218 read: IOPS=3984, BW=15.6MiB/s (16.3MB/s)(58.3MiB/3743msec) 00:10:17.218 slat (usec): min=10, max=15846, avg=18.04, stdev=215.90 00:10:17.218 clat (usec): min=123, max=2797, avg=231.46, stdev=69.42 00:10:17.218 lat (usec): min=135, max=16003, avg=249.50, stdev=226.06 00:10:17.218 clat percentiles (usec): 00:10:17.218 | 1.00th=[ 131], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 153], 00:10:17.218 | 30.00th=[ 178], 40.00th=[ 247], 50.00th=[ 255], 60.00th=[ 262], 00:10:17.218 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 297], 00:10:17.218 | 99.00th=[ 314], 99.50th=[ 326], 99.90th=[ 570], 99.95th=[ 1090], 00:10:17.218 | 99.99th=[ 2507] 00:10:17.218 bw ( KiB/s): min=13904, max=22255, per=23.47%, avg=15428.43, stdev=3029.52, samples=7 00:10:17.218 iops : min= 3476, max= 5563, avg=3857.00, stdev=757.10, samples=7 00:10:17.218 lat (usec) : 250=43.58%, 500=56.31%, 750=0.04%, 1000=0.01% 00:10:17.218 lat (msec) : 2=0.04%, 4=0.02% 00:10:17.218 cpu : usr=1.10%, sys=5.13%, ctx=14928, majf=0, minf=2 00:10:17.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 issued rwts: total=14914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.218 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66899: Wed Nov 20 15:58:15 2024 00:10:17.218 read: IOPS=5346, BW=20.9MiB/s (21.9MB/s)(66.6MiB/3187msec) 00:10:17.218 slat (usec): min=10, max=9393, avg=13.31, stdev=93.57 00:10:17.218 clat (usec): min=138, max=1877, avg=172.55, stdev=27.15 00:10:17.218 lat (usec): min=149, max=9568, avg=185.86, stdev=97.70 00:10:17.218 clat percentiles (usec): 00:10:17.218 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:10:17.218 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:17.218 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:10:17.218 | 99.00th=[ 215], 99.50th=[ 227], 99.90th=[ 297], 99.95th=[ 529], 00:10:17.218 | 99.99th=[ 1631] 00:10:17.218 bw ( KiB/s): min=20880, max=21840, per=32.59%, avg=21421.33, stdev=402.51, samples=6 00:10:17.218 iops : min= 5220, max= 5460, avg=5355.33, stdev=100.63, samples=6 00:10:17.218 lat 
(usec) : 250=99.83%, 500=0.11%, 750=0.02%, 1000=0.01% 00:10:17.218 lat (msec) : 2=0.02% 00:10:17.218 cpu : usr=1.44%, sys=5.93%, ctx=17043, majf=0, minf=1 00:10:17.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.218 issued rwts: total=17040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.218 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66900: Wed Nov 20 15:58:15 2024 00:10:17.218 read: IOPS=3533, BW=13.8MiB/s (14.5MB/s)(40.4MiB/2929msec) 00:10:17.218 slat (usec): min=11, max=155, avg=13.64, stdev= 3.71 00:10:17.218 clat (usec): min=149, max=2127, avg=267.97, stdev=33.61 00:10:17.218 lat (usec): min=163, max=2153, avg=281.60, stdev=34.19 00:10:17.218 clat percentiles (usec): 00:10:17.218 | 1.00th=[ 227], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 253], 00:10:17.218 | 30.00th=[ 258], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:10:17.218 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 297], 00:10:17.218 | 99.00th=[ 326], 99.50th=[ 379], 99.90th=[ 474], 99.95th=[ 742], 00:10:17.218 | 99.99th=[ 1467] 00:10:17.218 bw ( KiB/s): min=13872, max=14392, per=21.52%, avg=14142.40, stdev=198.91, samples=5 00:10:17.218 iops : min= 3468, max= 3598, avg=3535.60, stdev=49.73, samples=5 00:10:17.218 lat (usec) : 250=14.83%, 500=85.06%, 750=0.05%, 1000=0.02% 00:10:17.218 lat (msec) : 2=0.02%, 4=0.01% 00:10:17.218 cpu : usr=1.09%, sys=3.96%, ctx=10350, majf=0, minf=2 00:10:17.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.219 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.219 issued rwts: total=10349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.219 00:10:17.219 Run status group 0 (all jobs): 00:10:17.219 READ: bw=64.2MiB/s (67.3MB/s), 13.8MiB/s-21.8MiB/s (14.5MB/s-22.8MB/s), io=240MiB (252MB), run=2929-3743msec 00:10:17.219 00:10:17.219 Disk stats (read/write): 00:10:17.219 nvme0n1: ios=18748/0, merge=0/0, ticks=3091/0, in_queue=3091, util=94.99% 00:10:17.219 nvme0n2: ios=14074/0, merge=0/0, ticks=3349/0, in_queue=3349, util=95.15% 00:10:17.219 nvme0n3: ios=16634/0, merge=0/0, ticks=2916/0, in_queue=2916, util=96.36% 00:10:17.219 nvme0n4: ios=10103/0, merge=0/0, ticks=2732/0, in_queue=2732, util=96.75% 00:10:17.219 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.219 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:17.477 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.477 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:17.735 15:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.735 15:58:15 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:18.326 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.326 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:18.585 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.585 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66853 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:18.843 15:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.843 nvmf hotplug test: fio failed as expected 00:10:18.843 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:18.843 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:18.843 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:18.843 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
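The teardown traced here follows a disconnect-then-verify pattern: the initiator is detached with nvme disconnect, waitforserial_disconnect checks lsblk until no block device carrying the subsystem serial remains, and only then is the subsystem deleted over RPC. A condensed sketch of that pattern, with an illustrative retry limit that is not taken from fio.sh:

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  for _ in $(seq 1 20); do                                   # retry limit chosen for illustration
      lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME || break
      sleep 1
  done
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1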
00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.102 rmmod nvme_tcp 00:10:19.102 rmmod nvme_fabrics 00:10:19.102 rmmod nvme_keyring 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 66466 ']' 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 66466 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 66466 ']' 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 66466 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.102 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66466 00:10:19.360 killing process with pid 66466 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66466' 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 66466 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 66466 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:19.360 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:19.619 00:10:19.619 real 0m20.682s 00:10:19.619 user 1m17.911s 00:10:19.619 sys 0m10.435s 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.619 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.619 ************************************ 00:10:19.619 END TEST nvmf_fio_target 00:10:19.619 ************************************ 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.878 ************************************ 00:10:19.878 START TEST nvmf_bdevio 00:10:19.878 ************************************ 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:19.878 * Looking for test storage... 
00:10:19.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.878 15:58:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.878 --rc genhtml_branch_coverage=1 00:10:19.878 --rc genhtml_function_coverage=1 00:10:19.878 --rc genhtml_legend=1 00:10:19.878 --rc geninfo_all_blocks=1 00:10:19.878 --rc geninfo_unexecuted_blocks=1 00:10:19.878 00:10:19.878 ' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.878 --rc genhtml_branch_coverage=1 00:10:19.878 --rc genhtml_function_coverage=1 00:10:19.878 --rc genhtml_legend=1 00:10:19.878 --rc geninfo_all_blocks=1 00:10:19.878 --rc geninfo_unexecuted_blocks=1 00:10:19.878 00:10:19.878 ' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.878 --rc genhtml_branch_coverage=1 00:10:19.878 --rc genhtml_function_coverage=1 00:10:19.878 --rc genhtml_legend=1 00:10:19.878 --rc geninfo_all_blocks=1 00:10:19.878 --rc geninfo_unexecuted_blocks=1 00:10:19.878 00:10:19.878 ' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:19.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.878 --rc genhtml_branch_coverage=1 00:10:19.878 --rc genhtml_function_coverage=1 00:10:19.878 --rc genhtml_legend=1 00:10:19.878 --rc geninfo_all_blocks=1 00:10:19.878 --rc geninfo_unexecuted_blocks=1 00:10:19.878 00:10:19.878 ' 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:19.878 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
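bdevio.sh sets MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 before bringing the target up. The provisioning calls themselves fall outside this excerpt, but a typical sequence for creating such a bdev and exporting it over NVMe/TCP with SPDK RPCs looks like the sketch below; the NQN, serial, and listener address reuse values seen elsewhere in this run, and the sequence is illustrative rather than a transcript of bdevio.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create -b Malloc0 64 512                  # 64 MB bdev with 512-byte blocks
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420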
00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:19.879 Cannot find device "nvmf_init_br" 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:19.879 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:20.138 Cannot find device "nvmf_init_br2" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:20.138 Cannot find device "nvmf_tgt_br" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.138 Cannot find device "nvmf_tgt_br2" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:20.138 Cannot find device "nvmf_init_br" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:20.138 Cannot find device "nvmf_init_br2" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:20.138 Cannot find device "nvmf_tgt_br" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:20.138 Cannot find device "nvmf_tgt_br2" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:20.138 Cannot find device "nvmf_br" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:20.138 Cannot find device "nvmf_init_if" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:20.138 Cannot find device "nvmf_init_if2" 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.138 
15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:20.138 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:20.139 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:20.139 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:20.139 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:20.139 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:20.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:10:20.398 00:10:20.398 --- 10.0.0.3 ping statistics --- 00:10:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.398 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:20.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:20.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:10:20.398 00:10:20.398 --- 10.0.0.4 ping statistics --- 00:10:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.398 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:20.398 00:10:20.398 --- 10.0.0.1 ping statistics --- 00:10:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.398 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:20.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:20.398 00:10:20.398 --- 10.0.0.2 ping statistics --- 00:10:20.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.398 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=67220 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 67220 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 67220 ']' 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.398 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.398 [2024-11-20 15:58:18.589821] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
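The nvmf_veth_init trace above boils down to the standalone commands below, a minimal sketch using the interface names and addresses from this log (run as root; the second *_if2/*_br2 pair is created the same way), followed by the nvmf_tgt launch inside the namespace exactly as nvmfappstart issues it:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # bridge the host and namespace halves
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # host side reaches the namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # and the namespace reaches the host
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &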
00:10:20.398 [2024-11-20 15:58:18.589952] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.657 [2024-11-20 15:58:18.743801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.657 [2024-11-20 15:58:18.805085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.657 [2024-11-20 15:58:18.805150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.657 [2024-11-20 15:58:18.805165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.657 [2024-11-20 15:58:18.805176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.657 [2024-11-20 15:58:18.805185] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.657 [2024-11-20 15:58:18.806773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.657 [2024-11-20 15:58:18.806860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:20.657 [2024-11-20 15:58:18.806958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.657 [2024-11-20 15:58:18.806946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:20.657 [2024-11-20 15:58:18.865445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 [2024-11-20 15:58:18.984278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.915 15:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 Malloc0 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.915 [2024-11-20 15:58:19.043877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:20.915 { 00:10:20.915 "params": { 00:10:20.915 "name": "Nvme$subsystem", 00:10:20.915 "trtype": "$TEST_TRANSPORT", 00:10:20.915 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.915 "adrfam": "ipv4", 00:10:20.915 "trsvcid": "$NVMF_PORT", 00:10:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.915 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.915 "hdgst": ${hdgst:-false}, 00:10:20.915 "ddgst": ${ddgst:-false} 00:10:20.915 }, 00:10:20.915 "method": "bdev_nvme_attach_controller" 00:10:20.915 } 00:10:20.915 EOF 00:10:20.915 )") 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
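The rpc_cmd provisioning calls traced above map one-for-one onto plain rpc.py invocations; a sketch with arguments copied from the log and the script path assumed from this repo layout (the target's default /var/tmp/spdk.sock UNIX socket is a filesystem path, so it stays reachable from the host even though the app runs in the namespace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # same options as NVMF_TRANSPORT_OPTS above
  $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB ramdisk, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420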
00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:20.915 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.915 "params": { 00:10:20.915 "name": "Nvme1", 00:10:20.915 "trtype": "tcp", 00:10:20.915 "traddr": "10.0.0.3", 00:10:20.915 "adrfam": "ipv4", 00:10:20.915 "trsvcid": "4420", 00:10:20.915 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.915 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.915 "hdgst": false, 00:10:20.915 "ddgst": false 00:10:20.915 }, 00:10:20.915 "method": "bdev_nvme_attach_controller" 00:10:20.915 }' 00:10:20.915 [2024-11-20 15:58:19.100632] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:10:20.915 [2024-11-20 15:58:19.100913] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67248 ] 00:10:21.173 [2024-11-20 15:58:19.251368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.173 [2024-11-20 15:58:19.310748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.173 [2024-11-20 15:58:19.310902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.173 [2024-11-20 15:58:19.310915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.173 [2024-11-20 15:58:19.373123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.431 I/O targets: 00:10:21.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:21.431 00:10:21.431 00:10:21.431 CUnit - A unit testing framework for C - Version 2.1-3 00:10:21.431 http://cunit.sourceforge.net/ 00:10:21.431 00:10:21.431 00:10:21.431 Suite: bdevio tests on: Nvme1n1 00:10:21.431 Test: blockdev write read block ...passed 00:10:21.431 Test: blockdev write zeroes read block ...passed 00:10:21.431 Test: blockdev write zeroes read no split ...passed 00:10:21.431 Test: blockdev write zeroes read split ...passed 00:10:21.431 Test: blockdev write zeroes read split partial ...passed 00:10:21.431 Test: blockdev reset ...[2024-11-20 15:58:19.518277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:21.431 [2024-11-20 15:58:19.518374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e26180 (9): Bad file descriptor 00:10:21.431 [2024-11-20 15:58:19.534363] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:21.431 passed 00:10:21.431 Test: blockdev write read 8 blocks ...passed 00:10:21.431 Test: blockdev write read size > 128k ...passed 00:10:21.431 Test: blockdev write read invalid size ...passed 00:10:21.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:21.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:21.431 Test: blockdev write read max offset ...passed 00:10:21.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:21.432 Test: blockdev writev readv 8 blocks ...passed 00:10:21.432 Test: blockdev writev readv 30 x 1block ...passed 00:10:21.432 Test: blockdev writev readv block ...passed 00:10:21.432 Test: blockdev writev readv size > 128k ...passed 00:10:21.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:21.432 Test: blockdev comparev and writev ...[2024-11-20 15:58:19.542414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.542474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.542500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.542514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.542985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.543043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.543340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.543397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.543842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.543898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.432 [2024-11-20 15:58:19.543910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:21.432 passed 00:10:21.432 Test: blockdev nvme passthru rw ...passed 00:10:21.432 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:58:19.544739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.432 [2024-11-20 15:58:19.544772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.544920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.432 [2024-11-20 15:58:19.544942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.545057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.432 [2024-11-20 15:58:19.545084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:21.432 [2024-11-20 15:58:19.545191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.432 [2024-11-20 15:58:19.545209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:21.432 passed 00:10:21.432 Test: blockdev nvme admin passthru ...passed 00:10:21.432 Test: blockdev copy ...passed 00:10:21.432 00:10:21.432 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.432 suites 1 1 n/a 0 0 00:10:21.432 tests 23 23 23 0 0 00:10:21.432 asserts 152 152 152 0 n/a 00:10:21.432 00:10:21.432 Elapsed time = 0.152 seconds 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.690 rmmod nvme_tcp 00:10:21.690 rmmod nvme_fabrics 00:10:21.690 rmmod nvme_keyring 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
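For reference, the gen_nvmf_target_json config fed to bdevio above is just one bdev_nvme_attach_controller call against the listener created earlier; in an app that exposes an RPC socket (for example a plain spdk_tgt acting as initiator) the same attach could be issued by hand. A sketch assuming the usual rpc.py flag names, reusing $rpc from the sketch above:

  $rpc bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # The attached namespace shows up as bdev Nvme1n1, which is what the bdevio suite above exercised.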
00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 67220 ']' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 67220 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 67220 ']' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 67220 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67220 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:21.690 killing process with pid 67220 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67220' 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 67220 00:10:21.690 15:58:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 67220 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:21.948 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:10:22.206 00:10:22.206 real 0m2.502s 00:10:22.206 user 0m6.476s 00:10:22.206 sys 0m0.836s 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:22.206 ************************************ 00:10:22.206 END TEST nvmf_bdevio 00:10:22.206 ************************************ 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:22.206 00:10:22.206 real 2m36.201s 00:10:22.206 user 6m52.185s 00:10:22.206 sys 0m52.174s 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.206 15:58:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.206 ************************************ 00:10:22.206 END TEST nvmf_target_core 00:10:22.207 ************************************ 00:10:22.466 15:58:20 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.466 15:58:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.466 15:58:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.466 15:58:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.466 ************************************ 00:10:22.466 START TEST nvmf_target_extra 00:10:22.466 ************************************ 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:22.466 * Looking for test storage... 
00:10:22.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.466 --rc genhtml_branch_coverage=1 00:10:22.466 --rc genhtml_function_coverage=1 00:10:22.466 --rc genhtml_legend=1 00:10:22.466 --rc geninfo_all_blocks=1 00:10:22.466 --rc geninfo_unexecuted_blocks=1 00:10:22.466 00:10:22.466 ' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.466 --rc genhtml_branch_coverage=1 00:10:22.466 --rc genhtml_function_coverage=1 00:10:22.466 --rc genhtml_legend=1 00:10:22.466 --rc geninfo_all_blocks=1 00:10:22.466 --rc geninfo_unexecuted_blocks=1 00:10:22.466 00:10:22.466 ' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.466 --rc genhtml_branch_coverage=1 00:10:22.466 --rc genhtml_function_coverage=1 00:10:22.466 --rc genhtml_legend=1 00:10:22.466 --rc geninfo_all_blocks=1 00:10:22.466 --rc geninfo_unexecuted_blocks=1 00:10:22.466 00:10:22.466 ' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.466 --rc genhtml_branch_coverage=1 00:10:22.466 --rc genhtml_function_coverage=1 00:10:22.466 --rc genhtml_legend=1 00:10:22.466 --rc geninfo_all_blocks=1 00:10:22.466 --rc geninfo_unexecuted_blocks=1 00:10:22.466 00:10:22.466 ' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.466 15:58:20 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.466 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.467 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.467 ************************************ 00:10:22.467 START TEST nvmf_auth_target 00:10:22.467 ************************************ 00:10:22.467 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:22.726 * Looking for test storage... 
00:10:22.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.726 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.727 --rc genhtml_branch_coverage=1 00:10:22.727 --rc genhtml_function_coverage=1 00:10:22.727 --rc genhtml_legend=1 00:10:22.727 --rc geninfo_all_blocks=1 00:10:22.727 --rc geninfo_unexecuted_blocks=1 00:10:22.727 00:10:22.727 ' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.727 --rc genhtml_branch_coverage=1 00:10:22.727 --rc genhtml_function_coverage=1 00:10:22.727 --rc genhtml_legend=1 00:10:22.727 --rc geninfo_all_blocks=1 00:10:22.727 --rc geninfo_unexecuted_blocks=1 00:10:22.727 00:10:22.727 ' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.727 --rc genhtml_branch_coverage=1 00:10:22.727 --rc genhtml_function_coverage=1 00:10:22.727 --rc genhtml_legend=1 00:10:22.727 --rc geninfo_all_blocks=1 00:10:22.727 --rc geninfo_unexecuted_blocks=1 00:10:22.727 00:10:22.727 ' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.727 --rc genhtml_branch_coverage=1 00:10:22.727 --rc genhtml_function_coverage=1 00:10:22.727 --rc genhtml_legend=1 00:10:22.727 --rc geninfo_all_blocks=1 00:10:22.727 --rc geninfo_unexecuted_blocks=1 00:10:22.727 00:10:22.727 ' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.727 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:22.727 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:22.728 
15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:22.728 Cannot find device "nvmf_init_br" 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:22.728 Cannot find device "nvmf_init_br2" 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:22.728 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:22.988 Cannot find device "nvmf_tgt_br" 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:22.988 Cannot find device "nvmf_tgt_br2" 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:22.988 Cannot find device "nvmf_init_br" 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:10:22.988 15:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:22.988 Cannot find device "nvmf_init_br2" 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:22.988 Cannot find device "nvmf_tgt_br" 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:22.988 Cannot find device "nvmf_tgt_br2" 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:22.988 Cannot find device "nvmf_br" 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:22.988 Cannot find device "nvmf_init_if" 00:10:22.988 15:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:22.988 Cannot find device "nvmf_init_if2" 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:22.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:22.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:22.988 15:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:22.988 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:10:23.247 00:10:23.247 --- 10.0.0.3 ping statistics --- 00:10:23.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.247 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:10:23.247 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:23.247 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:23.247 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:23.247 00:10:23.247 --- 10.0.0.4 ping statistics --- 00:10:23.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.248 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:23.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:23.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:10:23.248 00:10:23.248 --- 10.0.0.1 ping statistics --- 00:10:23.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.248 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:23.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:23.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:23.248 00:10:23.248 --- 10.0.0.2 ping statistics --- 00:10:23.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.248 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=67536 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 67536 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67536 ']' 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
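The nvmf_veth_init steps traced above amount to a small two-sided topology: the initiator side keeps 10.0.0.1 and 10.0.0.2 in the default network namespace, the target side gets 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, and both sides are joined through the nvmf_br bridge, with iptables ACCEPT rules tagged SPDK_NVMF opening TCP port 4420 and ping used in both directions as a reachability check. The sketch below rebuilds the same topology by hand under stated assumptions: it reuses the interface names and addresses from the trace, keeps only the first veth pair on each side for brevity, assumes root with iproute2 and iptables available, and omits the cleanup and error handling that nvmf/common.sh performs.

# Minimal sketch of the veth/namespace layout built above (not the full nvmf_veth_init).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge joining both sides
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Allow NVMe/TCP (port 4420) in and forwarding across the bridge, as the SPDK_NVMF-tagged rules do.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability check in both directions, matching the pings in the trace.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this in place a target launched via "ip netns exec nvmf_tgt_ns_spdk nvmf_tgt ..." can listen on 10.0.0.3:4420 and be reached from the default namespace, which is how nvmfappstart runs the app in the entries that follow.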
00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.248 15:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=67568 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=710cbca79b1170b62db6281d0f9573739eab6c34454639dc 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tlQ 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 710cbca79b1170b62db6281d0f9573739eab6c34454639dc 0 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 710cbca79b1170b62db6281d0f9573739eab6c34454639dc 0 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=710cbca79b1170b62db6281d0f9573739eab6c34454639dc 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.624 15:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tlQ 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tlQ 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.tlQ 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=950f86702bf2bd48fabbfcc4b8bf13cd796aca5c36e34e0791822bb5d9ae8bd3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yWm 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 950f86702bf2bd48fabbfcc4b8bf13cd796aca5c36e34e0791822bb5d9ae8bd3 3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 950f86702bf2bd48fabbfcc4b8bf13cd796aca5c36e34e0791822bb5d9ae8bd3 3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=950f86702bf2bd48fabbfcc4b8bf13cd796aca5c36e34e0791822bb5d9ae8bd3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yWm 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yWm 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.yWm 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:24.624 15:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8fb105cccf8c7ccf370ae1be986679b3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.rmU 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8fb105cccf8c7ccf370ae1be986679b3 1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8fb105cccf8c7ccf370ae1be986679b3 1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8fb105cccf8c7ccf370ae1be986679b3 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.rmU 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.rmU 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rmU 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b87baea27d26860e1c8d2957b687b7965195e8e62498779b 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qQf 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b87baea27d26860e1c8d2957b687b7965195e8e62498779b 2 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b87baea27d26860e1c8d2957b687b7965195e8e62498779b 2 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b87baea27d26860e1c8d2957b687b7965195e8e62498779b 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qQf 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qQf 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qQf 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.624 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b787edf986762032dae76ed59e72419744ef94948ce3035 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Og0 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b787edf986762032dae76ed59e72419744ef94948ce3035 2 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b787edf986762032dae76ed59e72419744ef94948ce3035 2 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b787edf986762032dae76ed59e72419744ef94948ce3035 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Og0 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Og0 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Og0 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.625 15:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=509d70858a31de9ca62b28e32cef8d56 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3Yg 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 509d70858a31de9ca62b28e32cef8d56 1 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 509d70858a31de9ca62b28e32cef8d56 1 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=509d70858a31de9ca62b28e32cef8d56 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3Yg 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3Yg 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.3Yg 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:24.625 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b08a0a0b9b1c0838a7663c8ccec5d3c91d71635dc113e6025d7b4d75dc9feed 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.fTI 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
6b08a0a0b9b1c0838a7663c8ccec5d3c91d71635dc113e6025d7b4d75dc9feed 3 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b08a0a0b9b1c0838a7663c8ccec5d3c91d71635dc113e6025d7b4d75dc9feed 3 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b08a0a0b9b1c0838a7663c8ccec5d3c91d71635dc113e6025d7b4d75dc9feed 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.fTI 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.fTI 00:10:24.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.fTI 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 67536 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67536 ']' 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.884 15:58:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 67568 /var/tmp/host.sock 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 67568 ']' 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
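Each gen_dhchap_key call above draws random bytes with xxd and pipes them through a small inline python program (format_dhchap_key / format_key) to produce the DHHC-1 secret written to a /tmp/spdk.key-* file and chmod'd to 0600. Judging from the secrets that appear later in this log (for example DHHC-1:00:NzEwY2Jj.../FsFvA==: corresponding to the key 710cbca79b1170b62db6281d0f9573739eab6c34454639dc), the secret appears to be a DHHC-1 prefix, the hash identifier from the digests map above (00 null, 01 sha256, 02 sha384, 03 sha512), and base64 of the key bytes with a CRC-32 of those bytes appended. The sketch below reproduces that framing; the little-endian CRC byte order and the exact layout are inferences from the visible output, not a quote of nvmf/common.sh.

# Hedged sketch of the DHHC-1 secret framing seen in this log.
key_hex=710cbca79b1170b62db6281d0f9573739eab6c34454639dc   # keys[0] value from the trace above
python3 - "$key_hex" 0 <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                     # the ASCII hex string is used as the key material
hash_id = int(sys.argv[2])                     # 0 null, 1 sha256, 2 sha384, 3 sha512
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed little-endian, based on the observed secrets
print(f"DHHC-1:{hash_id:02x}:{base64.b64encode(key + crc).decode()}:")
PY

Run with the key0 value above, this prints the same DHHC-1:00:... string that the later nvme connect invocations pass as --dhchap-secret; the remaining keys and ckeys follow the same pattern with their respective hash identifiers.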
00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.143 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tlQ 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tlQ 00:10:25.401 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tlQ 00:10:25.966 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.yWm ]] 00:10:25.966 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yWm 00:10:25.966 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.967 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.967 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.967 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yWm 00:10:25.967 15:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yWm 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rmU 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rmU 00:10:26.226 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rmU 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.qQf ]] 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qQf 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qQf 00:10:26.484 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qQf 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Og0 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Og0 00:10:26.743 15:58:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Og0 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.3Yg ]] 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Yg 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Yg 00:10:27.000 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Yg 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fTI 00:10:27.258 15:58:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fTI 00:10:27.258 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fTI 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:27.516 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:27.774 15:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.031 00:10:28.031 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:28.031 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.031 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.289 { 00:10:28.289 "cntlid": 1, 00:10:28.289 "qid": 0, 00:10:28.289 "state": "enabled", 00:10:28.289 "thread": "nvmf_tgt_poll_group_000", 00:10:28.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:28.289 "listen_address": { 00:10:28.289 "trtype": "TCP", 00:10:28.289 "adrfam": "IPv4", 00:10:28.289 "traddr": "10.0.0.3", 00:10:28.289 "trsvcid": "4420" 00:10:28.289 }, 00:10:28.289 "peer_address": { 00:10:28.289 "trtype": "TCP", 00:10:28.289 "adrfam": "IPv4", 00:10:28.289 "traddr": "10.0.0.1", 00:10:28.289 "trsvcid": "44164" 00:10:28.289 }, 00:10:28.289 "auth": { 00:10:28.289 "state": "completed", 00:10:28.289 "digest": "sha256", 00:10:28.289 "dhgroup": "null" 00:10:28.289 } 00:10:28.289 } 00:10:28.289 ]' 00:10:28.289 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.546 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.804 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:28.804 15:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.083 15:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.083 15:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:34.083 00:10:34.083 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.083 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.083 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.340 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.340 { 00:10:34.340 "cntlid": 3, 00:10:34.340 "qid": 0, 00:10:34.340 "state": "enabled", 00:10:34.340 "thread": "nvmf_tgt_poll_group_000", 00:10:34.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:34.340 "listen_address": { 00:10:34.340 "trtype": "TCP", 00:10:34.340 "adrfam": "IPv4", 00:10:34.340 "traddr": "10.0.0.3", 00:10:34.340 "trsvcid": "4420" 00:10:34.340 }, 00:10:34.341 "peer_address": { 00:10:34.341 "trtype": "TCP", 00:10:34.341 "adrfam": "IPv4", 00:10:34.341 "traddr": "10.0.0.1", 00:10:34.341 "trsvcid": "41142" 00:10:34.341 }, 00:10:34.341 "auth": { 00:10:34.341 "state": "completed", 00:10:34.341 "digest": "sha256", 00:10:34.341 "dhgroup": "null" 00:10:34.341 } 00:10:34.341 } 00:10:34.341 ]' 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.341 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.599 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret 
DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:34.599 15:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:35.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.530 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:35.788 15:58:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:36.045 00:10:36.045 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.045 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.045 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.303 { 00:10:36.303 "cntlid": 5, 00:10:36.303 "qid": 0, 00:10:36.303 "state": "enabled", 00:10:36.303 "thread": "nvmf_tgt_poll_group_000", 00:10:36.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:36.303 "listen_address": { 00:10:36.303 "trtype": "TCP", 00:10:36.303 "adrfam": "IPv4", 00:10:36.303 "traddr": "10.0.0.3", 00:10:36.303 "trsvcid": "4420" 00:10:36.303 }, 00:10:36.303 "peer_address": { 00:10:36.303 "trtype": "TCP", 00:10:36.303 "adrfam": "IPv4", 00:10:36.303 "traddr": "10.0.0.1", 00:10:36.303 "trsvcid": "41166" 00:10:36.303 }, 00:10:36.303 "auth": { 00:10:36.303 "state": "completed", 00:10:36.303 "digest": "sha256", 00:10:36.303 "dhgroup": "null" 00:10:36.303 } 00:10:36.303 } 00:10:36.303 ]' 00:10:36.303 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:36.561 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.820 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:36.820 15:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:37.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:37.754 15:58:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:38.013 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:38.272 00:10:38.272 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.272 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.272 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.529 { 00:10:38.529 "cntlid": 7, 00:10:38.529 "qid": 0, 00:10:38.529 "state": "enabled", 00:10:38.529 "thread": "nvmf_tgt_poll_group_000", 00:10:38.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:38.529 "listen_address": { 00:10:38.529 "trtype": "TCP", 00:10:38.529 "adrfam": "IPv4", 00:10:38.529 "traddr": "10.0.0.3", 00:10:38.529 "trsvcid": "4420" 00:10:38.529 }, 00:10:38.529 "peer_address": { 00:10:38.529 "trtype": "TCP", 00:10:38.529 "adrfam": "IPv4", 00:10:38.529 "traddr": "10.0.0.1", 00:10:38.529 "trsvcid": "57946" 00:10:38.529 }, 00:10:38.529 "auth": { 00:10:38.529 "state": "completed", 00:10:38.529 "digest": "sha256", 00:10:38.529 "dhgroup": "null" 00:10:38.529 } 00:10:38.529 } 00:10:38.529 ]' 00:10:38.529 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.787 15:58:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.045 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:39.045 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:39.991 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:39.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:39.991 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:39.991 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.991 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.991 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.992 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:39.992 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:39.992 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.992 15:58:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:39.992 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:40.575 00:10:40.575 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.575 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.575 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:40.833 { 00:10:40.833 "cntlid": 9, 00:10:40.833 "qid": 0, 00:10:40.833 "state": "enabled", 00:10:40.833 "thread": "nvmf_tgt_poll_group_000", 00:10:40.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:40.833 "listen_address": { 00:10:40.833 "trtype": "TCP", 00:10:40.833 "adrfam": "IPv4", 00:10:40.833 "traddr": "10.0.0.3", 00:10:40.833 "trsvcid": "4420" 00:10:40.833 }, 00:10:40.833 "peer_address": { 00:10:40.833 "trtype": "TCP", 00:10:40.833 "adrfam": "IPv4", 00:10:40.833 "traddr": "10.0.0.1", 00:10:40.833 "trsvcid": "57984" 00:10:40.833 }, 00:10:40.833 "auth": { 00:10:40.833 "state": "completed", 00:10:40.833 "digest": "sha256", 00:10:40.833 "dhgroup": "ffdhe2048" 00:10:40.833 } 00:10:40.833 } 00:10:40.833 ]' 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:40.833 15:58:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:40.833 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.833 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.833 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.098 
15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:41.098 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:42.031 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.031 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:42.031 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.031 15:58:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.031 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.031 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.031 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.031 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:42.291 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.292 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:42.549 00:10:42.549 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.549 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.549 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.807 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:42.807 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:42.807 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.807 15:58:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.807 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.807 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:42.807 { 00:10:42.807 "cntlid": 11, 00:10:42.807 "qid": 0, 00:10:42.807 "state": "enabled", 00:10:42.807 "thread": "nvmf_tgt_poll_group_000", 00:10:42.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:42.807 "listen_address": { 00:10:42.807 "trtype": "TCP", 00:10:42.807 "adrfam": "IPv4", 00:10:42.807 "traddr": "10.0.0.3", 00:10:42.807 "trsvcid": "4420" 00:10:42.807 }, 00:10:42.807 "peer_address": { 00:10:42.807 "trtype": "TCP", 00:10:42.807 "adrfam": "IPv4", 00:10:42.807 "traddr": "10.0.0.1", 00:10:42.807 "trsvcid": "58006" 00:10:42.807 }, 00:10:42.807 "auth": { 00:10:42.807 "state": "completed", 00:10:42.807 "digest": "sha256", 00:10:42.807 "dhgroup": "ffdhe2048" 00:10:42.807 } 00:10:42.807 } 00:10:42.807 ]' 00:10:42.807 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.063 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.063 
15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.319 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:43.319 15:58:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:43.884 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.451 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:44.709 00:10:44.709 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:44.709 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.709 15:58:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.274 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.274 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.274 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.274 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.275 { 00:10:45.275 "cntlid": 13, 00:10:45.275 "qid": 0, 00:10:45.275 "state": "enabled", 00:10:45.275 "thread": "nvmf_tgt_poll_group_000", 00:10:45.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:45.275 "listen_address": { 00:10:45.275 "trtype": "TCP", 00:10:45.275 "adrfam": "IPv4", 00:10:45.275 "traddr": "10.0.0.3", 00:10:45.275 "trsvcid": "4420" 00:10:45.275 }, 00:10:45.275 "peer_address": { 00:10:45.275 "trtype": "TCP", 00:10:45.275 "adrfam": "IPv4", 00:10:45.275 "traddr": "10.0.0.1", 00:10:45.275 "trsvcid": "58028" 00:10:45.275 }, 00:10:45.275 "auth": { 00:10:45.275 "state": "completed", 00:10:45.275 "digest": "sha256", 00:10:45.275 "dhgroup": "ffdhe2048" 00:10:45.275 } 00:10:45.275 } 00:10:45.275 ]' 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.275 15:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.275 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.534 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:45.534 15:58:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:46.098 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:46.099 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.356 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
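The records above repeat one connect_authenticate() pass from target/auth.sh for each digest/dhgroup/key combination: the host side is pinned to a single DH-HMAC-CHAP digest and DH group, the host NQN is registered on the subsystem with the key pair under test, a controller is attached from the SPDK host application, the qpair is checked for auth.state "completed" with the expected digest and dhgroup, and the controller is detached again. A minimal sketch of one such pass, assuming the rpc_cmd and hostrpc wrappers behave as they do throughout this run (rpc_cmd driving the nvmf target, hostrpc driving the host-side app via rpc.py -s /var/tmp/host.sock) and reusing the subsystem NQN, host UUID, and key names shown above; key1/ckey1 stand in for whichever keyid the loop is on:

    # Host side: allow only the digest/dhgroup combination being exercised.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123

    # Target side: register the host with the DH-HMAC-CHAP key pair under test
    # (the controller key is omitted for key3, which has no ckey).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attaching a controller forces the authentication exchange.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the attached controller and the authenticated qpair.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect completed

    # Tear down before the next digest/dhgroup/key combination.
    hostrpc bdev_nvme_detach_controller nvme0

The nvme connect / nvme disconnect / nvmf_subsystem_remove_host records surrounding each pass repeat the same handshake through the kernel nvme-tcp initiator, passing the DHHC-1 secrets printed in the log directly via --dhchap-secret/--dhchap-ctrl-secret instead of the key0-key3/ckey0-ckey3 names used by the RPCs.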
00:10:46.613 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.613 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:46.613 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:46.613 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:46.870 00:10:46.871 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.871 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.871 15:58:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.127 { 00:10:47.127 "cntlid": 15, 00:10:47.127 "qid": 0, 00:10:47.127 "state": "enabled", 00:10:47.127 "thread": "nvmf_tgt_poll_group_000", 00:10:47.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:47.127 "listen_address": { 00:10:47.127 "trtype": "TCP", 00:10:47.127 "adrfam": "IPv4", 00:10:47.127 "traddr": "10.0.0.3", 00:10:47.127 "trsvcid": "4420" 00:10:47.127 }, 00:10:47.127 "peer_address": { 00:10:47.127 "trtype": "TCP", 00:10:47.127 "adrfam": "IPv4", 00:10:47.127 "traddr": "10.0.0.1", 00:10:47.127 "trsvcid": "58064" 00:10:47.127 }, 00:10:47.127 "auth": { 00:10:47.127 "state": "completed", 00:10:47.127 "digest": "sha256", 00:10:47.127 "dhgroup": "ffdhe2048" 00:10:47.127 } 00:10:47.127 } 00:10:47.127 ]' 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.127 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.385 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:47.385 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.385 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.385 
15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.385 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.643 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:47.643 15:58:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.209 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:48.473 15:58:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:49.037 00:10:49.037 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.037 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.037 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.295 { 00:10:49.295 "cntlid": 17, 00:10:49.295 "qid": 0, 00:10:49.295 "state": "enabled", 00:10:49.295 "thread": "nvmf_tgt_poll_group_000", 00:10:49.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:49.295 "listen_address": { 00:10:49.295 "trtype": "TCP", 00:10:49.295 "adrfam": "IPv4", 00:10:49.295 "traddr": "10.0.0.3", 00:10:49.295 "trsvcid": "4420" 00:10:49.295 }, 00:10:49.295 "peer_address": { 00:10:49.295 "trtype": "TCP", 00:10:49.295 "adrfam": "IPv4", 00:10:49.295 "traddr": "10.0.0.1", 00:10:49.295 "trsvcid": "54044" 00:10:49.295 }, 00:10:49.295 "auth": { 00:10:49.295 "state": "completed", 00:10:49.295 "digest": "sha256", 00:10:49.295 "dhgroup": "ffdhe3072" 00:10:49.295 } 00:10:49.295 } 00:10:49.295 ]' 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.295 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:49.553 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.553 15:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.553 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.553 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.811 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:49.811 15:58:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.376 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.634 15:58:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:50.892 00:10:50.892 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:50.892 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.892 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.149 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.149 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.149 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.149 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.407 { 00:10:51.407 "cntlid": 19, 00:10:51.407 "qid": 0, 00:10:51.407 "state": "enabled", 00:10:51.407 "thread": "nvmf_tgt_poll_group_000", 00:10:51.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:51.407 "listen_address": { 00:10:51.407 "trtype": "TCP", 00:10:51.407 "adrfam": "IPv4", 00:10:51.407 "traddr": "10.0.0.3", 00:10:51.407 "trsvcid": "4420" 00:10:51.407 }, 00:10:51.407 "peer_address": { 00:10:51.407 "trtype": "TCP", 00:10:51.407 "adrfam": "IPv4", 00:10:51.407 "traddr": "10.0.0.1", 00:10:51.407 "trsvcid": "54080" 00:10:51.407 }, 00:10:51.407 "auth": { 00:10:51.407 "state": "completed", 00:10:51.407 "digest": "sha256", 00:10:51.407 "dhgroup": "ffdhe3072" 00:10:51.407 } 00:10:51.407 } 00:10:51.407 ]' 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.407 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.665 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:51.665 15:58:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.231 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:52.489 15:58:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:53.054 00:10:53.054 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.054 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.054 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.311 { 00:10:53.311 "cntlid": 21, 00:10:53.311 "qid": 0, 00:10:53.311 "state": "enabled", 00:10:53.311 "thread": "nvmf_tgt_poll_group_000", 00:10:53.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:53.311 "listen_address": { 00:10:53.311 "trtype": "TCP", 00:10:53.311 "adrfam": "IPv4", 00:10:53.311 "traddr": "10.0.0.3", 00:10:53.311 "trsvcid": "4420" 00:10:53.311 }, 00:10:53.311 "peer_address": { 00:10:53.311 "trtype": "TCP", 00:10:53.311 "adrfam": "IPv4", 00:10:53.311 "traddr": "10.0.0.1", 00:10:53.311 "trsvcid": "54110" 00:10:53.311 }, 00:10:53.311 "auth": { 00:10:53.311 "state": "completed", 00:10:53.311 "digest": "sha256", 00:10:53.311 "dhgroup": "ffdhe3072" 00:10:53.311 } 00:10:53.311 } 00:10:53.311 ]' 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.311 15:58:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.311 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.569 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:53.569 15:58:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:54.504 15:58:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:55.070 00:10:55.070 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.070 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.070 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.329 { 00:10:55.329 "cntlid": 23, 00:10:55.329 "qid": 0, 00:10:55.329 "state": "enabled", 00:10:55.329 "thread": "nvmf_tgt_poll_group_000", 00:10:55.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:55.329 "listen_address": { 00:10:55.329 "trtype": "TCP", 00:10:55.329 "adrfam": "IPv4", 00:10:55.329 "traddr": "10.0.0.3", 00:10:55.329 "trsvcid": "4420" 00:10:55.329 }, 00:10:55.329 "peer_address": { 00:10:55.329 "trtype": "TCP", 00:10:55.329 "adrfam": "IPv4", 00:10:55.329 "traddr": "10.0.0.1", 00:10:55.329 "trsvcid": "54134" 00:10:55.329 }, 00:10:55.329 "auth": { 00:10:55.329 "state": "completed", 00:10:55.329 "digest": "sha256", 00:10:55.329 "dhgroup": "ffdhe3072" 00:10:55.329 } 00:10:55.329 } 00:10:55.329 ]' 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.329 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.895 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:55.895 15:58:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.459 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.733 15:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:56.989 00:10:56.989 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:56.989 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:56.989 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.246 { 00:10:57.246 "cntlid": 25, 00:10:57.246 "qid": 0, 00:10:57.246 "state": "enabled", 00:10:57.246 "thread": "nvmf_tgt_poll_group_000", 00:10:57.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:57.246 "listen_address": { 00:10:57.246 "trtype": "TCP", 00:10:57.246 "adrfam": "IPv4", 00:10:57.246 "traddr": "10.0.0.3", 00:10:57.246 "trsvcid": "4420" 00:10:57.246 }, 00:10:57.246 "peer_address": { 00:10:57.246 "trtype": "TCP", 00:10:57.246 "adrfam": "IPv4", 00:10:57.246 "traddr": "10.0.0.1", 00:10:57.246 "trsvcid": "54168" 00:10:57.246 }, 00:10:57.246 "auth": { 00:10:57.246 "state": "completed", 00:10:57.246 "digest": "sha256", 00:10:57.246 "dhgroup": "ffdhe4096" 00:10:57.246 } 00:10:57.246 } 00:10:57.246 ]' 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.246 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:57.503 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:57.503 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:57.503 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.503 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.503 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:57.762 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:57.762 15:58:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.335 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:58.900 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:10:58.900 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:58.900 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:58.900 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:58.900 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:58.901 15:58:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:59.157 00:10:59.157 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.157 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.157 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:59.414 { 00:10:59.414 "cntlid": 27, 00:10:59.414 "qid": 0, 00:10:59.414 "state": "enabled", 00:10:59.414 "thread": "nvmf_tgt_poll_group_000", 00:10:59.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:10:59.414 "listen_address": { 00:10:59.414 "trtype": "TCP", 00:10:59.414 "adrfam": "IPv4", 00:10:59.414 "traddr": "10.0.0.3", 00:10:59.414 "trsvcid": "4420" 00:10:59.414 }, 00:10:59.414 "peer_address": { 00:10:59.414 "trtype": "TCP", 00:10:59.414 "adrfam": "IPv4", 00:10:59.414 "traddr": "10.0.0.1", 00:10:59.414 "trsvcid": "54860" 00:10:59.414 }, 00:10:59.414 "auth": { 00:10:59.414 "state": "completed", 
00:10:59.414 "digest": "sha256", 00:10:59.414 "dhgroup": "ffdhe4096" 00:10:59.414 } 00:10:59.414 } 00:10:59.414 ]' 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:59.414 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:59.670 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.670 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.670 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.928 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:10:59.928 15:58:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.495 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.753 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:00.753 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:00.753 15:58:58 
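Note: after the RPC-based attach has been validated, the same key pair is exercised through the kernel initiator, where the secrets are passed literally in DHHC-1 text form instead of by key name. A sketch of that leg with the flags used in this run (host NQN and host ID copied from the log; the secrets are placeholders, not the values from this run):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123
  hostid=ca768c1a-78f6-4242-8009-85e76e7a8123

  # Bidirectional DH-HMAC-CHAP: --dhchap-secret authenticates the host,
  # --dhchap-ctrl-secret authenticates the controller back to the host.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:01:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>:'

  # Drop the connection and revoke the host entry before the next iteration.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn"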
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:00.753 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:00.753 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:00.753 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:00.754 15:58:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:01.333 00:11:01.333 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:01.333 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.333 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:01.594 { 00:11:01.594 "cntlid": 29, 00:11:01.594 "qid": 0, 00:11:01.594 "state": "enabled", 00:11:01.594 "thread": "nvmf_tgt_poll_group_000", 00:11:01.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:01.594 "listen_address": { 00:11:01.594 "trtype": "TCP", 00:11:01.594 "adrfam": "IPv4", 00:11:01.594 "traddr": "10.0.0.3", 00:11:01.594 "trsvcid": "4420" 00:11:01.594 }, 00:11:01.594 "peer_address": { 00:11:01.594 "trtype": "TCP", 00:11:01.594 "adrfam": 
"IPv4", 00:11:01.594 "traddr": "10.0.0.1", 00:11:01.594 "trsvcid": "54894" 00:11:01.594 }, 00:11:01.594 "auth": { 00:11:01.594 "state": "completed", 00:11:01.594 "digest": "sha256", 00:11:01.594 "dhgroup": "ffdhe4096" 00:11:01.594 } 00:11:01.594 } 00:11:01.594 ]' 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.594 15:58:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.852 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:01.852 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:02.785 15:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.785 15:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.785 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.785 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:02.785 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.785 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:03.350 00:11:03.350 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:03.350 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:03.350 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.607 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:03.607 { 00:11:03.607 "cntlid": 31, 00:11:03.607 "qid": 0, 00:11:03.607 "state": "enabled", 00:11:03.607 "thread": "nvmf_tgt_poll_group_000", 00:11:03.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:03.607 "listen_address": { 00:11:03.607 "trtype": "TCP", 00:11:03.607 "adrfam": "IPv4", 00:11:03.607 "traddr": "10.0.0.3", 00:11:03.607 "trsvcid": "4420" 00:11:03.607 }, 00:11:03.607 "peer_address": { 00:11:03.607 "trtype": "TCP", 
00:11:03.607 "adrfam": "IPv4", 00:11:03.607 "traddr": "10.0.0.1", 00:11:03.607 "trsvcid": "54912" 00:11:03.608 }, 00:11:03.608 "auth": { 00:11:03.608 "state": "completed", 00:11:03.608 "digest": "sha256", 00:11:03.608 "dhgroup": "ffdhe4096" 00:11:03.608 } 00:11:03.608 } 00:11:03.608 ]' 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.608 15:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.174 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:04.174 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.740 15:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:04.998 
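Note: the pattern repeating through this part of the log is two nested loops: for each DH group the host driver's allowed digests/dhgroups are narrowed with bdev_nvme_set_options, then every key index is attached and verified. key3 has no matching controller key, so the ${ckeys[...]:+...} expansion drops --dhchap-ctrlr-key and that round authenticates the host only. A condensed sketch of one dhgroup iteration, using the names and the ffdhe6144 value from this run (the keys themselves are assumed to have been registered earlier in the test; target-side RPCs assumed on the default socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123
  dhgroup=ffdhe6144

  # Only whether an entry is non-empty matters here; key3 has no controller key.
  ckeys=([0]=present [1]=present [2]=present [3]=)

  # Restrict what the host-side driver may negotiate for this iteration.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"

  for keyid in 0 1 2 3; do
      # Empty ckeys entry -> ckey expands to nothing -> unidirectional auth.
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

      "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
          --dhchap-key "key$keyid" "${ckey[@]}"
      "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
          -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
          --dhchap-key "key$keyid" "${ckey[@]}"
      # ... qpair verification and teardown as in the sketch above ...
      "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done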
15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.998 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:05.256 00:11:05.517 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:05.517 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:05.517 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:05.780 { 00:11:05.780 "cntlid": 33, 00:11:05.780 "qid": 0, 00:11:05.780 "state": "enabled", 00:11:05.780 "thread": "nvmf_tgt_poll_group_000", 00:11:05.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:05.780 "listen_address": { 00:11:05.780 "trtype": "TCP", 00:11:05.780 "adrfam": "IPv4", 00:11:05.780 "traddr": 
"10.0.0.3", 00:11:05.780 "trsvcid": "4420" 00:11:05.780 }, 00:11:05.780 "peer_address": { 00:11:05.780 "trtype": "TCP", 00:11:05.780 "adrfam": "IPv4", 00:11:05.780 "traddr": "10.0.0.1", 00:11:05.780 "trsvcid": "54944" 00:11:05.780 }, 00:11:05.780 "auth": { 00:11:05.780 "state": "completed", 00:11:05.780 "digest": "sha256", 00:11:05.780 "dhgroup": "ffdhe6144" 00:11:05.780 } 00:11:05.780 } 00:11:05.780 ]' 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.780 15:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.347 15:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:06.347 15:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:06.914 15:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:06.914 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.174 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.739 00:11:07.739 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.739 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.739 15:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.996 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.996 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.996 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.996 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.996 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.997 { 00:11:07.997 "cntlid": 35, 00:11:07.997 "qid": 0, 00:11:07.997 "state": "enabled", 00:11:07.997 "thread": "nvmf_tgt_poll_group_000", 
00:11:07.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:07.997 "listen_address": { 00:11:07.997 "trtype": "TCP", 00:11:07.997 "adrfam": "IPv4", 00:11:07.997 "traddr": "10.0.0.3", 00:11:07.997 "trsvcid": "4420" 00:11:07.997 }, 00:11:07.997 "peer_address": { 00:11:07.997 "trtype": "TCP", 00:11:07.997 "adrfam": "IPv4", 00:11:07.997 "traddr": "10.0.0.1", 00:11:07.997 "trsvcid": "54968" 00:11:07.997 }, 00:11:07.997 "auth": { 00:11:07.997 "state": "completed", 00:11:07.997 "digest": "sha256", 00:11:07.997 "dhgroup": "ffdhe6144" 00:11:07.997 } 00:11:07.997 } 00:11:07.997 ]' 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:07.997 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:08.254 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.254 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.254 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.512 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:08.512 15:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:09.077 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:09.333 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.333 15:59:07 
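Note: the --dhchap-secret / --dhchap-ctrl-secret values passed to nvme-cli above are the textual secret representation used for NVMe in-band authentication: a literal DHHC-1 prefix, a two-digit field describing how the key material was derived (in this run key0 through key3 carry 00 through 03), the base64-encoded payload, and a trailing colon. A purely syntactic sketch for splitting one into its fields (the payload below is an illustrative placeholder, and the field meanings are read as above, informational only):

  # Split a DHHC-1 secret string into its colon-separated fields.
  secret='DHHC-1:01:BASE64PAYLOADGOESHERE:'
  IFS=: read -r prefix xform b64 _ <<< "$secret"
  printf 'prefix=%s transform-id=%s payload=%s\n' "$prefix" "$xform" "$b64"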
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.590 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:09.590 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:09.590 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.591 15:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.848 00:11:09.848 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:09.848 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.848 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.413 { 
00:11:10.413 "cntlid": 37, 00:11:10.413 "qid": 0, 00:11:10.413 "state": "enabled", 00:11:10.413 "thread": "nvmf_tgt_poll_group_000", 00:11:10.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:10.413 "listen_address": { 00:11:10.413 "trtype": "TCP", 00:11:10.413 "adrfam": "IPv4", 00:11:10.413 "traddr": "10.0.0.3", 00:11:10.413 "trsvcid": "4420" 00:11:10.413 }, 00:11:10.413 "peer_address": { 00:11:10.413 "trtype": "TCP", 00:11:10.413 "adrfam": "IPv4", 00:11:10.413 "traddr": "10.0.0.1", 00:11:10.413 "trsvcid": "48668" 00:11:10.413 }, 00:11:10.413 "auth": { 00:11:10.413 "state": "completed", 00:11:10.413 "digest": "sha256", 00:11:10.413 "dhgroup": "ffdhe6144" 00:11:10.413 } 00:11:10.413 } 00:11:10.413 ]' 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:10.413 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.414 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.414 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.414 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.672 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:10.672 15:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:11.605 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.863 15:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.429 00:11:12.429 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.429 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.429 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:12.687 { 00:11:12.687 "cntlid": 39, 00:11:12.687 "qid": 0, 00:11:12.687 "state": "enabled", 00:11:12.687 "thread": "nvmf_tgt_poll_group_000", 00:11:12.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:12.687 "listen_address": { 00:11:12.687 "trtype": "TCP", 00:11:12.687 "adrfam": "IPv4", 00:11:12.687 "traddr": "10.0.0.3", 00:11:12.687 "trsvcid": "4420" 00:11:12.687 }, 00:11:12.687 "peer_address": { 00:11:12.687 "trtype": "TCP", 00:11:12.687 "adrfam": "IPv4", 00:11:12.687 "traddr": "10.0.0.1", 00:11:12.687 "trsvcid": "48696" 00:11:12.687 }, 00:11:12.687 "auth": { 00:11:12.687 "state": "completed", 00:11:12.687 "digest": "sha256", 00:11:12.687 "dhgroup": "ffdhe6144" 00:11:12.687 } 00:11:12.687 } 00:11:12.687 ]' 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.687 15:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.945 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:12.946 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:13.512 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.512 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:13.512 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.512 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.770 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.770 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.770 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.770 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:13.770 15:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.028 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.710 00:11:14.710 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.710 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.710 15:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.969 { 00:11:14.969 "cntlid": 41, 00:11:14.969 "qid": 0, 00:11:14.969 "state": "enabled", 00:11:14.969 "thread": "nvmf_tgt_poll_group_000", 00:11:14.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:14.969 "listen_address": { 00:11:14.969 "trtype": "TCP", 00:11:14.969 "adrfam": "IPv4", 00:11:14.969 "traddr": "10.0.0.3", 00:11:14.969 "trsvcid": "4420" 00:11:14.969 }, 00:11:14.969 "peer_address": { 00:11:14.969 "trtype": "TCP", 00:11:14.969 "adrfam": "IPv4", 00:11:14.969 "traddr": "10.0.0.1", 00:11:14.969 "trsvcid": "48732" 00:11:14.969 }, 00:11:14.969 "auth": { 00:11:14.969 "state": "completed", 00:11:14.969 "digest": "sha256", 00:11:14.969 "dhgroup": "ffdhe8192" 00:11:14.969 } 00:11:14.969 } 00:11:14.969 ]' 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:14.969 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.228 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.228 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.228 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.486 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:15.486 15:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:16.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
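Each pass above exercises the same target/host RPC pairing before the kernel-initiator check. The sketch below is not taken from the captured run; it condenses those calls, with $digest, $dhgroup and $keyid standing in for the loop variables of target/auth.sh, and the target assumed to answer on rpc.py's default socket (the log's rpc_cmd wrapper hides the actual path):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123
subnqn=nqn.2024-03.io.spdk:cnode0

# Host-side SPDK app (the /var/tmp/host.sock instance): limit DH-HMAC-CHAP
# negotiation to the digest/dhgroup pair under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Target (default RPC socket assumed): allow the host and bind it to this
# iteration's keyring entries; the controller key is optional (see the key3 calls).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

# Host-side SPDK app again: attaching the controller is what forces the
# authentication handshake with the keys named above.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}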
00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.051 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:16.308 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:16.308 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.308 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.308 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:16.308 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.309 15:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:17.242 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:17.242 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.242 15:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:17.501 { 00:11:17.501 "cntlid": 43, 00:11:17.501 "qid": 0, 00:11:17.501 "state": "enabled", 00:11:17.501 "thread": "nvmf_tgt_poll_group_000", 00:11:17.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:17.501 "listen_address": { 00:11:17.501 "trtype": "TCP", 00:11:17.501 "adrfam": "IPv4", 00:11:17.501 "traddr": "10.0.0.3", 00:11:17.501 "trsvcid": "4420" 00:11:17.501 }, 00:11:17.501 "peer_address": { 00:11:17.501 "trtype": "TCP", 00:11:17.501 "adrfam": "IPv4", 00:11:17.501 "traddr": "10.0.0.1", 00:11:17.501 "trsvcid": "48754" 00:11:17.501 }, 00:11:17.501 "auth": { 00:11:17.501 "state": "completed", 00:11:17.501 "digest": "sha256", 00:11:17.501 "dhgroup": "ffdhe8192" 00:11:17.501 } 00:11:17.501 } 00:11:17.501 ]' 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:17.501 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.759 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:17.759 15:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:18.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
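The get_controllers / get_qpairs / jq sequence traced above is how the test proves the handshake actually completed with the negotiated parameters. Condensed here as a sketch, reusing the same rpc.py path, assuming the default target RPC socket and a single qpair in the listing; $digest and $dhgroup again stand in for the loop variables:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# The attached bdev controller must show up under the expected name.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target-side qpair must report the digest/dhgroup under test and a
# completed authentication state.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Detach again so the next initiator (nvme-cli) can run the same handshake.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0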
00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:18.693 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.951 15:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:19.519 00:11:19.519 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:19.519 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:19.519 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.778 15:59:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:19.778 { 00:11:19.778 "cntlid": 45, 00:11:19.778 "qid": 0, 00:11:19.778 "state": "enabled", 00:11:19.778 "thread": "nvmf_tgt_poll_group_000", 00:11:19.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:19.778 "listen_address": { 00:11:19.778 "trtype": "TCP", 00:11:19.778 "adrfam": "IPv4", 00:11:19.778 "traddr": "10.0.0.3", 00:11:19.778 "trsvcid": "4420" 00:11:19.778 }, 00:11:19.778 "peer_address": { 00:11:19.778 "trtype": "TCP", 00:11:19.778 "adrfam": "IPv4", 00:11:19.778 "traddr": "10.0.0.1", 00:11:19.778 "trsvcid": "55270" 00:11:19.778 }, 00:11:19.778 "auth": { 00:11:19.778 "state": "completed", 00:11:19.778 "digest": "sha256", 00:11:19.778 "dhgroup": "ffdhe8192" 00:11:19.778 } 00:11:19.778 } 00:11:19.778 ]' 00:11:19.778 15:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.036 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.295 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:20.295 15:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:21.227 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
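After the SPDK-initiator pass, the same credentials are replayed through the kernel host with nvme-cli, as in the nvme connect line just above. A sketch using the key-2 values visible in the trace; the DHHC-1 secrets are shortened here for readability, and the target RPC socket is again assumed to be the default:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123
hostid=ca768c1a-78f6-4242-8009-85e76e7a8123
subnqn=nqn.2024-03.io.spdk:cnode0

# Kernel initiator: host secret plus controller secret gives bidirectional
# DH-HMAC-CHAP; the DHHC-1:NN:...: strings are the test keys printed in the log,
# truncated in this sketch.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret      'DHHC-1:02:NmI3ODdl...:' \
    --dhchap-ctrl-secret 'DHHC-1:01:NTA5ZDcw...:'
nvme disconnect -n "$subnqn"

# Target: drop the host entry so the next digest/dhgroup/key combination starts clean.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"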
00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:21.228 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:21.485 15:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:22.051 00:11:22.051 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.051 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.051 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.366 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.366 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.366 
15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.366 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.366 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.366 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.366 { 00:11:22.366 "cntlid": 47, 00:11:22.366 "qid": 0, 00:11:22.366 "state": "enabled", 00:11:22.366 "thread": "nvmf_tgt_poll_group_000", 00:11:22.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:22.366 "listen_address": { 00:11:22.366 "trtype": "TCP", 00:11:22.367 "adrfam": "IPv4", 00:11:22.367 "traddr": "10.0.0.3", 00:11:22.367 "trsvcid": "4420" 00:11:22.367 }, 00:11:22.367 "peer_address": { 00:11:22.367 "trtype": "TCP", 00:11:22.367 "adrfam": "IPv4", 00:11:22.367 "traddr": "10.0.0.1", 00:11:22.367 "trsvcid": "55300" 00:11:22.367 }, 00:11:22.367 "auth": { 00:11:22.367 "state": "completed", 00:11:22.367 "digest": "sha256", 00:11:22.367 "dhgroup": "ffdhe8192" 00:11:22.367 } 00:11:22.367 } 00:11:22.367 ]' 00:11:22.367 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.367 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.367 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.367 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:22.367 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.625 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.625 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.625 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.883 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:22.883 15:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
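Note the key3 case just above: both nvmf_subsystem_add_host and the matching nvme connect carry only a host key, because connect_authenticate's ${ckeys[$3]:+...} expansion is empty when no controller key is defined for that index, so key3 is exercised with unidirectional authentication while key0-key2 are bidirectional. A minimal illustration of that expansion, with a made-up ckeys array standing in for the one defined earlier in target/auth.sh:

# Hypothetical stand-in for the script's ckeys array: indices 0-2 have controller
# keys, index 3 deliberately has none (matching the key3 calls in the trace).
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)

for keyid in 1 3; do
    # Same pattern as connect_authenticate(): empty expansion -> no extra flag.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "key$keyid extra args: ${ckey[*]:-<none>}"
done
# key1 extra args: --dhchap-ctrlr-key ckey1
# key3 extra args: <none>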
00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:23.815 15:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:24.073 00:11:24.073 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.073 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.073 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.349 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.349 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.349 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.349 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.349 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.607 { 00:11:24.607 "cntlid": 49, 00:11:24.607 "qid": 0, 00:11:24.607 "state": "enabled", 00:11:24.607 "thread": "nvmf_tgt_poll_group_000", 00:11:24.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:24.607 "listen_address": { 00:11:24.607 "trtype": "TCP", 00:11:24.607 "adrfam": "IPv4", 00:11:24.607 "traddr": "10.0.0.3", 00:11:24.607 "trsvcid": "4420" 00:11:24.607 }, 00:11:24.607 "peer_address": { 00:11:24.607 "trtype": "TCP", 00:11:24.607 "adrfam": "IPv4", 00:11:24.607 "traddr": "10.0.0.1", 00:11:24.607 "trsvcid": "55334" 00:11:24.607 }, 00:11:24.607 "auth": { 00:11:24.607 "state": "completed", 00:11:24.607 "digest": "sha384", 00:11:24.607 "dhgroup": "null" 00:11:24.607 } 00:11:24.607 } 00:11:24.607 ]' 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.607 15:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.865 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:24.865 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.432 15:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.432 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.691 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.949 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.949 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.949 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.949 15:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:26.207 00:11:26.207 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.207 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.207 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.465 { 00:11:26.465 "cntlid": 51, 00:11:26.465 "qid": 0, 00:11:26.465 "state": "enabled", 00:11:26.465 "thread": "nvmf_tgt_poll_group_000", 00:11:26.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:26.465 "listen_address": { 00:11:26.465 "trtype": "TCP", 00:11:26.465 "adrfam": "IPv4", 00:11:26.465 "traddr": "10.0.0.3", 00:11:26.465 "trsvcid": "4420" 00:11:26.465 }, 00:11:26.465 "peer_address": { 00:11:26.465 "trtype": "TCP", 00:11:26.465 "adrfam": "IPv4", 00:11:26.465 "traddr": "10.0.0.1", 00:11:26.465 "trsvcid": "55344" 00:11:26.465 }, 00:11:26.465 "auth": { 00:11:26.465 "state": "completed", 00:11:26.465 "digest": "sha384", 00:11:26.465 "dhgroup": "null" 00:11:26.465 } 00:11:26.465 } 00:11:26.465 ]' 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.465 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:26.466 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.724 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.724 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.724 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.982 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:26.982 15:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.549 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:27.549 15:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.807 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.808 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.808 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.808 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:28.375 00:11:28.375 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.375 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:11:28.375 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.633 { 00:11:28.633 "cntlid": 53, 00:11:28.633 "qid": 0, 00:11:28.633 "state": "enabled", 00:11:28.633 "thread": "nvmf_tgt_poll_group_000", 00:11:28.633 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:28.633 "listen_address": { 00:11:28.633 "trtype": "TCP", 00:11:28.633 "adrfam": "IPv4", 00:11:28.633 "traddr": "10.0.0.3", 00:11:28.633 "trsvcid": "4420" 00:11:28.633 }, 00:11:28.633 "peer_address": { 00:11:28.633 "trtype": "TCP", 00:11:28.633 "adrfam": "IPv4", 00:11:28.633 "traddr": "10.0.0.1", 00:11:28.633 "trsvcid": "55368" 00:11:28.633 }, 00:11:28.633 "auth": { 00:11:28.633 "state": "completed", 00:11:28.633 "digest": "sha384", 00:11:28.633 "dhgroup": "null" 00:11:28.633 } 00:11:28.633 } 00:11:28.633 ]' 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.633 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:28.634 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.634 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.634 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.634 15:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.892 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:28.892 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:29.868 15:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.141 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:30.142 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.142 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:30.400 00:11:30.400 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.400 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:11:30.400 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.658 { 00:11:30.658 "cntlid": 55, 00:11:30.658 "qid": 0, 00:11:30.658 "state": "enabled", 00:11:30.658 "thread": "nvmf_tgt_poll_group_000", 00:11:30.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:30.658 "listen_address": { 00:11:30.658 "trtype": "TCP", 00:11:30.658 "adrfam": "IPv4", 00:11:30.658 "traddr": "10.0.0.3", 00:11:30.658 "trsvcid": "4420" 00:11:30.658 }, 00:11:30.658 "peer_address": { 00:11:30.658 "trtype": "TCP", 00:11:30.658 "adrfam": "IPv4", 00:11:30.658 "traddr": "10.0.0.1", 00:11:30.658 "trsvcid": "50600" 00:11:30.658 }, 00:11:30.658 "auth": { 00:11:30.658 "state": "completed", 00:11:30.658 "digest": "sha384", 00:11:30.658 "dhgroup": "null" 00:11:30.658 } 00:11:30.658 } 00:11:30.658 ]' 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:30.658 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.917 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:30.917 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.917 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.917 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.917 15:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.175 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:31.175 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:31.741 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
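Zooming out, the auth.sh@118/@119/@120 trace lines show this whole block is a three-level sweep over digests, DH groups and key indices, with one connect_authenticate call per combination. The skeleton below only sketches that shape: hostrpc, connect_authenticate and the keys array are the script's own helpers, the digest/dhgroup arrays are defined earlier in target/auth.sh, and the values listed are just the ones visible in this part of the log:

digests=(sha256 sha384)                        # subset visible above
dhgroups=(null ffdhe2048 ffdhe6144 ffdhe8192)  # subset visible above

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Constrain the host-side initiator, then run one authenticated
            # attach/verify/detach cycle plus the nvme-cli replay.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done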
00:11:31.741 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:31.741 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:31.998 15:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.257 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.515 00:11:32.515 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.515 
15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.515 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.774 { 00:11:32.774 "cntlid": 57, 00:11:32.774 "qid": 0, 00:11:32.774 "state": "enabled", 00:11:32.774 "thread": "nvmf_tgt_poll_group_000", 00:11:32.774 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:32.774 "listen_address": { 00:11:32.774 "trtype": "TCP", 00:11:32.774 "adrfam": "IPv4", 00:11:32.774 "traddr": "10.0.0.3", 00:11:32.774 "trsvcid": "4420" 00:11:32.774 }, 00:11:32.774 "peer_address": { 00:11:32.774 "trtype": "TCP", 00:11:32.774 "adrfam": "IPv4", 00:11:32.774 "traddr": "10.0.0.1", 00:11:32.774 "trsvcid": "50616" 00:11:32.774 }, 00:11:32.774 "auth": { 00:11:32.774 "state": "completed", 00:11:32.774 "digest": "sha384", 00:11:32.774 "dhgroup": "ffdhe2048" 00:11:32.774 } 00:11:32.774 } 00:11:32.774 ]' 00:11:32.774 15:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.774 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:32.774 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.032 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:33.032 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.032 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.032 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.032 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.290 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:33.290 15:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: 
--dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:33.856 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.856 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:33.856 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.856 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.856 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.114 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.114 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.114 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.372 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.631 00:11:34.631 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.631 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.631 15:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.889 { 00:11:34.889 "cntlid": 59, 00:11:34.889 "qid": 0, 00:11:34.889 "state": "enabled", 00:11:34.889 "thread": "nvmf_tgt_poll_group_000", 00:11:34.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:34.889 "listen_address": { 00:11:34.889 "trtype": "TCP", 00:11:34.889 "adrfam": "IPv4", 00:11:34.889 "traddr": "10.0.0.3", 00:11:34.889 "trsvcid": "4420" 00:11:34.889 }, 00:11:34.889 "peer_address": { 00:11:34.889 "trtype": "TCP", 00:11:34.889 "adrfam": "IPv4", 00:11:34.889 "traddr": "10.0.0.1", 00:11:34.889 "trsvcid": "50638" 00:11:34.889 }, 00:11:34.889 "auth": { 00:11:34.889 "state": "completed", 00:11:34.889 "digest": "sha384", 00:11:34.889 "dhgroup": "ffdhe2048" 00:11:34.889 } 00:11:34.889 } 00:11:34.889 ]' 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:34.889 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.147 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.147 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.147 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.147 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.147 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.405 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:35.405 15:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.971 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.972 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:35.972 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.230 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.797 00:11:36.797 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.797 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.797 15:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.056 { 00:11:37.056 "cntlid": 61, 00:11:37.056 "qid": 0, 00:11:37.056 "state": "enabled", 00:11:37.056 "thread": "nvmf_tgt_poll_group_000", 00:11:37.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:37.056 "listen_address": { 00:11:37.056 "trtype": "TCP", 00:11:37.056 "adrfam": "IPv4", 00:11:37.056 "traddr": "10.0.0.3", 00:11:37.056 "trsvcid": "4420" 00:11:37.056 }, 00:11:37.056 "peer_address": { 00:11:37.056 "trtype": "TCP", 00:11:37.056 "adrfam": "IPv4", 00:11:37.056 "traddr": "10.0.0.1", 00:11:37.056 "trsvcid": "50674" 00:11:37.056 }, 00:11:37.056 "auth": { 00:11:37.056 "state": "completed", 00:11:37.056 "digest": "sha384", 00:11:37.056 "dhgroup": "ffdhe2048" 00:11:37.056 } 00:11:37.056 } 00:11:37.056 ]' 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.056 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.314 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.314 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.314 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.573 15:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:37.573 15:59:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:38.139 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.397 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.398 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:38.398 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.398 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.656 00:11:38.656 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.656 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.657 15:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.915 { 00:11:38.915 "cntlid": 63, 00:11:38.915 "qid": 0, 00:11:38.915 "state": "enabled", 00:11:38.915 "thread": "nvmf_tgt_poll_group_000", 00:11:38.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:38.915 "listen_address": { 00:11:38.915 "trtype": "TCP", 00:11:38.915 "adrfam": "IPv4", 00:11:38.915 "traddr": "10.0.0.3", 00:11:38.915 "trsvcid": "4420" 00:11:38.915 }, 00:11:38.915 "peer_address": { 00:11:38.915 "trtype": "TCP", 00:11:38.915 "adrfam": "IPv4", 00:11:38.915 "traddr": "10.0.0.1", 00:11:38.915 "trsvcid": "47558" 00:11:38.915 }, 00:11:38.915 "auth": { 00:11:38.915 "state": "completed", 00:11:38.915 "digest": "sha384", 00:11:38.915 "dhgroup": "ffdhe2048" 00:11:38.915 } 00:11:38.915 } 00:11:38.915 ]' 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:38.915 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.173 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:39.173 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.173 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.173 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.173 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.431 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:39.431 15:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.366 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.367 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.367 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.367 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:11:40.367 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.941 00:11:40.941 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.941 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.941 15:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.199 { 00:11:41.199 "cntlid": 65, 00:11:41.199 "qid": 0, 00:11:41.199 "state": "enabled", 00:11:41.199 "thread": "nvmf_tgt_poll_group_000", 00:11:41.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:41.199 "listen_address": { 00:11:41.199 "trtype": "TCP", 00:11:41.199 "adrfam": "IPv4", 00:11:41.199 "traddr": "10.0.0.3", 00:11:41.199 "trsvcid": "4420" 00:11:41.199 }, 00:11:41.199 "peer_address": { 00:11:41.199 "trtype": "TCP", 00:11:41.199 "adrfam": "IPv4", 00:11:41.199 "traddr": "10.0.0.1", 00:11:41.199 "trsvcid": "47572" 00:11:41.199 }, 00:11:41.199 "auth": { 00:11:41.199 "state": "completed", 00:11:41.199 "digest": "sha384", 00:11:41.199 "dhgroup": "ffdhe3072" 00:11:41.199 } 00:11:41.199 } 00:11:41.199 ]' 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:41.199 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.458 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.458 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.458 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.717 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:41.717 15:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.289 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.571 15:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.571 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:42.830 00:11:42.830 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.830 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.830 15:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.091 { 00:11:43.091 "cntlid": 67, 00:11:43.091 "qid": 0, 00:11:43.091 "state": "enabled", 00:11:43.091 "thread": "nvmf_tgt_poll_group_000", 00:11:43.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:43.091 "listen_address": { 00:11:43.091 "trtype": "TCP", 00:11:43.091 "adrfam": "IPv4", 00:11:43.091 "traddr": "10.0.0.3", 00:11:43.091 "trsvcid": "4420" 00:11:43.091 }, 00:11:43.091 "peer_address": { 00:11:43.091 "trtype": "TCP", 00:11:43.091 "adrfam": "IPv4", 00:11:43.091 "traddr": "10.0.0.1", 00:11:43.091 "trsvcid": "47606" 00:11:43.091 }, 00:11:43.091 "auth": { 00:11:43.091 "state": "completed", 00:11:43.091 "digest": "sha384", 00:11:43.091 "dhgroup": "ffdhe3072" 00:11:43.091 } 00:11:43.091 } 00:11:43.091 ]' 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.091 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.349 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:43.349 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.349 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.349 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.349 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.607 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:43.607 15:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.542 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.543 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.543 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.543 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:44.543 15:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.111 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.111 { 00:11:45.111 "cntlid": 69, 00:11:45.111 "qid": 0, 00:11:45.111 "state": "enabled", 00:11:45.111 "thread": "nvmf_tgt_poll_group_000", 00:11:45.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:45.111 "listen_address": { 00:11:45.111 "trtype": "TCP", 00:11:45.111 "adrfam": "IPv4", 00:11:45.111 "traddr": "10.0.0.3", 00:11:45.111 "trsvcid": "4420" 00:11:45.111 }, 00:11:45.111 "peer_address": { 00:11:45.111 "trtype": "TCP", 00:11:45.111 "adrfam": "IPv4", 00:11:45.111 "traddr": "10.0.0.1", 00:11:45.111 "trsvcid": "47624" 00:11:45.111 }, 00:11:45.111 "auth": { 00:11:45.111 "state": "completed", 00:11:45.111 "digest": "sha384", 00:11:45.111 "dhgroup": "ffdhe3072" 00:11:45.111 } 00:11:45.111 } 00:11:45.111 ]' 00:11:45.111 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:11:45.370 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.629 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:45.629 15:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:46.195 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:46.761 15:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.019 00:11:47.019 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.019 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.019 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.277 { 00:11:47.277 "cntlid": 71, 00:11:47.277 "qid": 0, 00:11:47.277 "state": "enabled", 00:11:47.277 "thread": "nvmf_tgt_poll_group_000", 00:11:47.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:47.277 "listen_address": { 00:11:47.277 "trtype": "TCP", 00:11:47.277 "adrfam": "IPv4", 00:11:47.277 "traddr": "10.0.0.3", 00:11:47.277 "trsvcid": "4420" 00:11:47.277 }, 00:11:47.277 "peer_address": { 00:11:47.277 "trtype": "TCP", 00:11:47.277 "adrfam": "IPv4", 00:11:47.277 "traddr": "10.0.0.1", 00:11:47.277 "trsvcid": "47650" 00:11:47.277 }, 00:11:47.277 "auth": { 00:11:47.277 "state": "completed", 00:11:47.277 "digest": "sha384", 00:11:47.277 "dhgroup": "ffdhe3072" 00:11:47.277 } 00:11:47.277 } 00:11:47.277 ]' 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:47.277 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.535 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.535 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.535 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.793 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:47.793 15:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.359 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.617 15:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:48.617 15:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.183 00:11:49.183 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.183 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.183 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.442 { 00:11:49.442 "cntlid": 73, 00:11:49.442 "qid": 0, 00:11:49.442 "state": "enabled", 00:11:49.442 "thread": "nvmf_tgt_poll_group_000", 00:11:49.442 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:49.442 "listen_address": { 00:11:49.442 "trtype": "TCP", 00:11:49.442 "adrfam": "IPv4", 00:11:49.442 "traddr": "10.0.0.3", 00:11:49.442 "trsvcid": "4420" 00:11:49.442 }, 00:11:49.442 "peer_address": { 00:11:49.442 "trtype": "TCP", 00:11:49.442 "adrfam": "IPv4", 00:11:49.442 "traddr": "10.0.0.1", 00:11:49.442 "trsvcid": "54450" 00:11:49.442 }, 00:11:49.442 "auth": { 00:11:49.442 "state": "completed", 00:11:49.442 "digest": "sha384", 00:11:49.442 "dhgroup": "ffdhe4096" 00:11:49.442 } 00:11:49.442 } 00:11:49.442 ]' 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.442 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.702 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:49.702 15:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.639 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.898 15:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:50.898 15:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:51.159 00:11:51.159 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.159 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.159 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.727 { 00:11:51.727 "cntlid": 75, 00:11:51.727 "qid": 0, 00:11:51.727 "state": "enabled", 00:11:51.727 "thread": "nvmf_tgt_poll_group_000", 00:11:51.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:51.727 "listen_address": { 00:11:51.727 "trtype": "TCP", 00:11:51.727 "adrfam": "IPv4", 00:11:51.727 "traddr": "10.0.0.3", 00:11:51.727 "trsvcid": "4420" 00:11:51.727 }, 00:11:51.727 "peer_address": { 00:11:51.727 "trtype": "TCP", 00:11:51.727 "adrfam": "IPv4", 00:11:51.727 "traddr": "10.0.0.1", 00:11:51.727 "trsvcid": "54486" 00:11:51.727 }, 00:11:51.727 "auth": { 00:11:51.727 "state": "completed", 00:11:51.727 "digest": "sha384", 00:11:51.727 "dhgroup": "ffdhe4096" 00:11:51.727 } 00:11:51.727 } 00:11:51.727 ]' 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.727 15:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.986 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:51.986 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.922 15:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:52.922 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.489 00:11:53.489 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.489 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.489 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.748 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.748 { 00:11:53.748 "cntlid": 77, 00:11:53.748 "qid": 0, 00:11:53.748 "state": "enabled", 00:11:53.748 "thread": "nvmf_tgt_poll_group_000", 00:11:53.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:53.748 "listen_address": { 00:11:53.748 "trtype": "TCP", 00:11:53.748 "adrfam": "IPv4", 00:11:53.748 "traddr": "10.0.0.3", 00:11:53.748 "trsvcid": "4420" 00:11:53.748 }, 00:11:53.748 "peer_address": { 00:11:53.748 "trtype": "TCP", 00:11:53.748 "adrfam": "IPv4", 00:11:53.748 "traddr": "10.0.0.1", 00:11:53.748 "trsvcid": "54496" 00:11:53.748 }, 00:11:53.748 "auth": { 00:11:53.748 "state": "completed", 00:11:53.748 "digest": "sha384", 00:11:53.748 "dhgroup": "ffdhe4096" 00:11:53.748 } 00:11:53.748 } 00:11:53.748 ]' 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.749 15:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.008 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:54.008 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:54.944 15:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.204 15:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.204 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.464 00:11:55.464 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.464 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.464 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.077 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.077 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.077 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.077 15:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.077 { 00:11:56.077 "cntlid": 79, 00:11:56.077 "qid": 0, 00:11:56.077 "state": "enabled", 00:11:56.077 "thread": "nvmf_tgt_poll_group_000", 00:11:56.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:56.077 "listen_address": { 00:11:56.077 "trtype": "TCP", 00:11:56.077 "adrfam": "IPv4", 00:11:56.077 "traddr": "10.0.0.3", 00:11:56.077 "trsvcid": "4420" 00:11:56.077 }, 00:11:56.077 "peer_address": { 00:11:56.077 "trtype": "TCP", 00:11:56.077 "adrfam": "IPv4", 00:11:56.077 "traddr": "10.0.0.1", 00:11:56.077 "trsvcid": "54504" 00:11:56.077 }, 00:11:56.077 "auth": { 00:11:56.077 "state": "completed", 00:11:56.077 "digest": "sha384", 00:11:56.077 "dhgroup": "ffdhe4096" 00:11:56.077 } 00:11:56.077 } 00:11:56.077 ]' 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:56.077 15:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.077 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.334 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:56.334 15:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.268 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.269 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.836 00:11:57.836 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.836 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.836 15:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.095 { 00:11:58.095 "cntlid": 81, 00:11:58.095 "qid": 0, 00:11:58.095 "state": "enabled", 00:11:58.095 "thread": "nvmf_tgt_poll_group_000", 00:11:58.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:11:58.095 "listen_address": { 00:11:58.095 "trtype": "TCP", 00:11:58.095 "adrfam": "IPv4", 00:11:58.095 "traddr": "10.0.0.3", 00:11:58.095 "trsvcid": "4420" 00:11:58.095 }, 00:11:58.095 "peer_address": { 00:11:58.095 "trtype": "TCP", 00:11:58.095 "adrfam": "IPv4", 00:11:58.095 "traddr": "10.0.0.1", 00:11:58.095 "trsvcid": "54532" 00:11:58.095 }, 00:11:58.095 "auth": { 00:11:58.095 "state": "completed", 00:11:58.095 "digest": "sha384", 00:11:58.095 "dhgroup": "ffdhe6144" 00:11:58.095 } 00:11:58.095 } 00:11:58.095 ]' 00:11:58.095 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.353 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.612 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:58.612 15:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:59.548 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.806 15:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.373 00:12:00.373 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.373 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.373 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.631 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.632 { 00:12:00.632 "cntlid": 83, 00:12:00.632 "qid": 0, 00:12:00.632 "state": "enabled", 00:12:00.632 "thread": "nvmf_tgt_poll_group_000", 00:12:00.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:00.632 "listen_address": { 00:12:00.632 "trtype": "TCP", 00:12:00.632 "adrfam": "IPv4", 00:12:00.632 "traddr": "10.0.0.3", 00:12:00.632 "trsvcid": "4420" 00:12:00.632 }, 00:12:00.632 "peer_address": { 00:12:00.632 "trtype": "TCP", 00:12:00.632 "adrfam": "IPv4", 00:12:00.632 "traddr": "10.0.0.1", 00:12:00.632 "trsvcid": "40458" 00:12:00.632 }, 00:12:00.632 "auth": { 00:12:00.632 "state": "completed", 00:12:00.632 "digest": "sha384", 
00:12:00.632 "dhgroup": "ffdhe6144" 00:12:00.632 } 00:12:00.632 } 00:12:00.632 ]' 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.632 15:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.891 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:00.891 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:01.827 15:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.086 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.726 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.726 { 00:12:02.726 "cntlid": 85, 00:12:02.726 "qid": 0, 00:12:02.726 "state": "enabled", 00:12:02.726 "thread": "nvmf_tgt_poll_group_000", 00:12:02.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:02.726 "listen_address": { 00:12:02.726 "trtype": "TCP", 00:12:02.726 "adrfam": "IPv4", 00:12:02.726 "traddr": "10.0.0.3", 00:12:02.726 "trsvcid": "4420" 00:12:02.726 }, 00:12:02.726 "peer_address": { 00:12:02.726 "trtype": "TCP", 00:12:02.726 "adrfam": "IPv4", 00:12:02.726 "traddr": "10.0.0.1", 00:12:02.726 "trsvcid": "40486" 
00:12:02.726 }, 00:12:02.726 "auth": { 00:12:02.726 "state": "completed", 00:12:02.726 "digest": "sha384", 00:12:02.726 "dhgroup": "ffdhe6144" 00:12:02.726 } 00:12:02.726 } 00:12:02.726 ]' 00:12:02.726 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.985 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.985 16:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.985 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:02.985 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.985 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.985 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.985 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.243 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:03.243 16:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:04.176 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.435 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.694 00:12:04.694 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.694 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.694 16:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.953 { 00:12:04.953 "cntlid": 87, 00:12:04.953 "qid": 0, 00:12:04.953 "state": "enabled", 00:12:04.953 "thread": "nvmf_tgt_poll_group_000", 00:12:04.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:04.953 "listen_address": { 00:12:04.953 "trtype": "TCP", 00:12:04.953 "adrfam": "IPv4", 00:12:04.953 "traddr": "10.0.0.3", 00:12:04.953 "trsvcid": "4420" 00:12:04.953 }, 00:12:04.953 "peer_address": { 00:12:04.953 "trtype": "TCP", 00:12:04.953 "adrfam": "IPv4", 00:12:04.953 "traddr": "10.0.0.1", 00:12:04.953 "trsvcid": 
"40508" 00:12:04.953 }, 00:12:04.953 "auth": { 00:12:04.953 "state": "completed", 00:12:04.953 "digest": "sha384", 00:12:04.953 "dhgroup": "ffdhe6144" 00:12:04.953 } 00:12:04.953 } 00:12:04.953 ]' 00:12:04.953 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.213 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.472 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:05.472 16:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:06.041 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.300 16:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:06.869 00:12:07.128 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.128 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.128 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.386 { 00:12:07.386 "cntlid": 89, 00:12:07.386 "qid": 0, 00:12:07.386 "state": "enabled", 00:12:07.386 "thread": "nvmf_tgt_poll_group_000", 00:12:07.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:07.386 "listen_address": { 00:12:07.386 "trtype": "TCP", 00:12:07.386 "adrfam": "IPv4", 00:12:07.386 "traddr": "10.0.0.3", 00:12:07.386 "trsvcid": "4420" 00:12:07.386 }, 00:12:07.386 "peer_address": { 00:12:07.386 
"trtype": "TCP", 00:12:07.386 "adrfam": "IPv4", 00:12:07.386 "traddr": "10.0.0.1", 00:12:07.386 "trsvcid": "40528" 00:12:07.386 }, 00:12:07.386 "auth": { 00:12:07.386 "state": "completed", 00:12:07.386 "digest": "sha384", 00:12:07.386 "dhgroup": "ffdhe8192" 00:12:07.386 } 00:12:07.386 } 00:12:07.386 ]' 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.386 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.645 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:07.645 16:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:08.581 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.582 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:08.841 16:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.841 16:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.408 00:12:09.408 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.408 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.408 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.782 { 00:12:09.782 "cntlid": 91, 00:12:09.782 "qid": 0, 00:12:09.782 "state": "enabled", 00:12:09.782 "thread": "nvmf_tgt_poll_group_000", 00:12:09.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 
00:12:09.782 "listen_address": { 00:12:09.782 "trtype": "TCP", 00:12:09.782 "adrfam": "IPv4", 00:12:09.782 "traddr": "10.0.0.3", 00:12:09.782 "trsvcid": "4420" 00:12:09.782 }, 00:12:09.782 "peer_address": { 00:12:09.782 "trtype": "TCP", 00:12:09.782 "adrfam": "IPv4", 00:12:09.782 "traddr": "10.0.0.1", 00:12:09.782 "trsvcid": "36638" 00:12:09.782 }, 00:12:09.782 "auth": { 00:12:09.782 "state": "completed", 00:12:09.782 "digest": "sha384", 00:12:09.782 "dhgroup": "ffdhe8192" 00:12:09.782 } 00:12:09.782 } 00:12:09.782 ]' 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.782 16:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.782 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.782 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.041 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.041 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.041 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.299 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:10.300 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.917 16:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.174 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.741 00:12:11.741 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.741 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.741 16:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.000 { 00:12:12.000 "cntlid": 93, 00:12:12.000 "qid": 0, 00:12:12.000 "state": "enabled", 00:12:12.000 "thread": 
"nvmf_tgt_poll_group_000", 00:12:12.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:12.000 "listen_address": { 00:12:12.000 "trtype": "TCP", 00:12:12.000 "adrfam": "IPv4", 00:12:12.000 "traddr": "10.0.0.3", 00:12:12.000 "trsvcid": "4420" 00:12:12.000 }, 00:12:12.000 "peer_address": { 00:12:12.000 "trtype": "TCP", 00:12:12.000 "adrfam": "IPv4", 00:12:12.000 "traddr": "10.0.0.1", 00:12:12.000 "trsvcid": "36660" 00:12:12.000 }, 00:12:12.000 "auth": { 00:12:12.000 "state": "completed", 00:12:12.000 "digest": "sha384", 00:12:12.000 "dhgroup": "ffdhe8192" 00:12:12.000 } 00:12:12.000 } 00:12:12.000 ]' 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:12.000 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.260 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.260 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.260 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.518 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:12.518 16:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.084 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:13.084 16:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.650 16:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.217 00:12:14.217 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.217 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.217 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.475 { 00:12:14.475 "cntlid": 95, 00:12:14.475 "qid": 0, 00:12:14.475 "state": "enabled", 00:12:14.475 
"thread": "nvmf_tgt_poll_group_000", 00:12:14.475 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:14.475 "listen_address": { 00:12:14.475 "trtype": "TCP", 00:12:14.475 "adrfam": "IPv4", 00:12:14.475 "traddr": "10.0.0.3", 00:12:14.475 "trsvcid": "4420" 00:12:14.475 }, 00:12:14.475 "peer_address": { 00:12:14.475 "trtype": "TCP", 00:12:14.475 "adrfam": "IPv4", 00:12:14.475 "traddr": "10.0.0.1", 00:12:14.475 "trsvcid": "36678" 00:12:14.475 }, 00:12:14.475 "auth": { 00:12:14.475 "state": "completed", 00:12:14.475 "digest": "sha384", 00:12:14.475 "dhgroup": "ffdhe8192" 00:12:14.475 } 00:12:14.475 } 00:12:14.475 ]' 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:14.475 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.734 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.734 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.734 16:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.992 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:14.992 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.557 16:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:15.557 16:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.815 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.381 00:12:16.381 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.381 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.381 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.639 { 00:12:16.639 "cntlid": 97, 00:12:16.639 "qid": 0, 00:12:16.639 "state": "enabled", 00:12:16.639 "thread": "nvmf_tgt_poll_group_000", 00:12:16.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:16.639 "listen_address": { 00:12:16.639 "trtype": "TCP", 00:12:16.639 "adrfam": "IPv4", 00:12:16.639 "traddr": "10.0.0.3", 00:12:16.639 "trsvcid": "4420" 00:12:16.639 }, 00:12:16.639 "peer_address": { 00:12:16.639 "trtype": "TCP", 00:12:16.639 "adrfam": "IPv4", 00:12:16.639 "traddr": "10.0.0.1", 00:12:16.639 "trsvcid": "36712" 00:12:16.639 }, 00:12:16.639 "auth": { 00:12:16.639 "state": "completed", 00:12:16.639 "digest": "sha512", 00:12:16.639 "dhgroup": "null" 00:12:16.639 } 00:12:16.639 } 00:12:16.639 ]' 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.639 16:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.206 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:17.206 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:17.773 16:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.031 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.032 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.290 00:12:18.290 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.290 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.290 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.548 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.548 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.548 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.548 16:00:16 
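Each digest/dhgroup block in this trace begins by reconfiguring the host-side SPDK initiator with bdev_nvme_set_options, so that only the digest and DH group under test can be negotiated during DH-HMAC-CHAP. A minimal sketch of the two variants seen in this part of the log, assuming the host-side app exposes its RPC socket at /var/tmp/host.sock as shown above:

  # Restrict the initiator to SHA-384 with the ffdhe8192 group (earlier block in this log)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # Restrict it to SHA-512 with the "null" group (the block currently running)
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null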
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.807 { 00:12:18.807 "cntlid": 99, 00:12:18.807 "qid": 0, 00:12:18.807 "state": "enabled", 00:12:18.807 "thread": "nvmf_tgt_poll_group_000", 00:12:18.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:18.807 "listen_address": { 00:12:18.807 "trtype": "TCP", 00:12:18.807 "adrfam": "IPv4", 00:12:18.807 "traddr": "10.0.0.3", 00:12:18.807 "trsvcid": "4420" 00:12:18.807 }, 00:12:18.807 "peer_address": { 00:12:18.807 "trtype": "TCP", 00:12:18.807 "adrfam": "IPv4", 00:12:18.807 "traddr": "10.0.0.1", 00:12:18.807 "trsvcid": "38560" 00:12:18.807 }, 00:12:18.807 "auth": { 00:12:18.807 "state": "completed", 00:12:18.807 "digest": "sha512", 00:12:18.807 "dhgroup": "null" 00:12:18.807 } 00:12:18.807 } 00:12:18.807 ]' 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.807 16:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.373 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:19.374 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.940 16:00:17 
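The iteration that finishes above follows the same shape as every other key in this section: register the host NQN on the subsystem with a key pair, attach a controller from the SPDK host with the matching keys, confirm on the target that the qpair authenticated with the expected digest and DH group, then tear it all down. A condensed sketch of that cycle using the NQNs from this run; key1/ckey1 are key names loaded earlier in the test, and the target-side rpc.py is assumed to talk to its default RPC socket:

  # Target: allow the host on the subsystem with bidirectional DH-HMAC-CHAP keys
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Host (SPDK bdev_nvme initiator): attach and authenticate
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Target: check the negotiated auth parameters on the new qpair
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  # Host: detach before the next key is exercised
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0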
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:19.940 16:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.198 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.456 00:12:20.714 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.714 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.714 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.972 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:20.972 { 00:12:20.972 "cntlid": 101, 00:12:20.972 "qid": 0, 00:12:20.972 "state": "enabled", 00:12:20.972 "thread": "nvmf_tgt_poll_group_000", 00:12:20.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:20.972 "listen_address": { 00:12:20.972 "trtype": "TCP", 00:12:20.972 "adrfam": "IPv4", 00:12:20.972 "traddr": "10.0.0.3", 00:12:20.972 "trsvcid": "4420" 00:12:20.972 }, 00:12:20.972 "peer_address": { 00:12:20.973 "trtype": "TCP", 00:12:20.973 "adrfam": "IPv4", 00:12:20.973 "traddr": "10.0.0.1", 00:12:20.973 "trsvcid": "38588" 00:12:20.973 }, 00:12:20.973 "auth": { 00:12:20.973 "state": "completed", 00:12:20.973 "digest": "sha512", 00:12:20.973 "dhgroup": "null" 00:12:20.973 } 00:12:20.973 } 00:12:20.973 ]' 00:12:20.973 16:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.973 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.230 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:21.230 16:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:22.164 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.164 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:22.164 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.164 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:22.164 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.165 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.732 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.733 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:22.733 { 00:12:22.733 "cntlid": 103, 00:12:22.733 "qid": 0, 00:12:22.733 "state": "enabled", 00:12:22.733 "thread": "nvmf_tgt_poll_group_000", 00:12:22.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:22.733 "listen_address": { 00:12:22.733 "trtype": "TCP", 00:12:22.733 "adrfam": "IPv4", 00:12:22.733 "traddr": "10.0.0.3", 00:12:22.733 "trsvcid": "4420" 00:12:22.733 }, 00:12:22.733 "peer_address": { 00:12:22.733 "trtype": "TCP", 00:12:22.733 "adrfam": "IPv4", 00:12:22.733 "traddr": "10.0.0.1", 00:12:22.733 "trsvcid": "38616" 00:12:22.733 }, 00:12:22.733 "auth": { 00:12:22.733 "state": "completed", 00:12:22.733 "digest": "sha512", 00:12:22.733 "dhgroup": "null" 00:12:22.733 } 00:12:22.733 } 00:12:22.733 ]' 00:12:22.991 16:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.991 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.249 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:23.249 16:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.185 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.442 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:24.442 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.442 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:24.442 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:24.442 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.443 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.700 00:12:24.700 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:24.700 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.700 16:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.957 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.957 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.957 
16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.957 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.957 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.957 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.957 { 00:12:24.957 "cntlid": 105, 00:12:24.957 "qid": 0, 00:12:24.958 "state": "enabled", 00:12:24.958 "thread": "nvmf_tgt_poll_group_000", 00:12:24.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:24.958 "listen_address": { 00:12:24.958 "trtype": "TCP", 00:12:24.958 "adrfam": "IPv4", 00:12:24.958 "traddr": "10.0.0.3", 00:12:24.958 "trsvcid": "4420" 00:12:24.958 }, 00:12:24.958 "peer_address": { 00:12:24.958 "trtype": "TCP", 00:12:24.958 "adrfam": "IPv4", 00:12:24.958 "traddr": "10.0.0.1", 00:12:24.958 "trsvcid": "38642" 00:12:24.958 }, 00:12:24.958 "auth": { 00:12:24.958 "state": "completed", 00:12:24.958 "digest": "sha512", 00:12:24.958 "dhgroup": "ffdhe2048" 00:12:24.958 } 00:12:24.958 } 00:12:24.958 ]' 00:12:24.958 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.216 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.475 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:25.475 16:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:26.411 16:00:24 
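After the SPDK-initiator check, each key is also exercised through the kernel host with nvme-cli, passing the DH-HMAC-CHAP secrets directly in DHHC-1 form, and the host entry is then removed from the subsystem again. A sketch with placeholder secrets (the real values appear in the trace above; the flags mirror the nvme connect invocation used throughout this run):

  # Kernel initiator: authenticated connect (secrets shown here are placeholders)
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
      --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 \
      --dhchap-secret 'DHHC-1:00:<host secret>' --dhchap-ctrl-secret 'DHHC-1:03:<ctrl secret>'
  # Tear down: drop the kernel connection and revoke the host on the target
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123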
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.411 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.669 16:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.927 00:12:26.927 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.927 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.927 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.493 { 00:12:27.493 "cntlid": 107, 00:12:27.493 "qid": 0, 00:12:27.493 "state": "enabled", 00:12:27.493 "thread": "nvmf_tgt_poll_group_000", 00:12:27.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:27.493 "listen_address": { 00:12:27.493 "trtype": "TCP", 00:12:27.493 "adrfam": "IPv4", 00:12:27.493 "traddr": "10.0.0.3", 00:12:27.493 "trsvcid": "4420" 00:12:27.493 }, 00:12:27.493 "peer_address": { 00:12:27.493 "trtype": "TCP", 00:12:27.493 "adrfam": "IPv4", 00:12:27.493 "traddr": "10.0.0.1", 00:12:27.493 "trsvcid": "38650" 00:12:27.493 }, 00:12:27.493 "auth": { 00:12:27.493 "state": "completed", 00:12:27.493 "digest": "sha512", 00:12:27.493 "dhgroup": "ffdhe2048" 00:12:27.493 } 00:12:27.493 } 00:12:27.493 ]' 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.493 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.751 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:27.751 16:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.689 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.947 16:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.947 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.947 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.947 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.947 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.206 00:12:29.206 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.206 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.206 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.464 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.465 { 00:12:29.465 "cntlid": 109, 00:12:29.465 "qid": 0, 00:12:29.465 "state": "enabled", 00:12:29.465 "thread": "nvmf_tgt_poll_group_000", 00:12:29.465 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:29.465 "listen_address": { 00:12:29.465 "trtype": "TCP", 00:12:29.465 "adrfam": "IPv4", 00:12:29.465 "traddr": "10.0.0.3", 00:12:29.465 "trsvcid": "4420" 00:12:29.465 }, 00:12:29.465 "peer_address": { 00:12:29.465 "trtype": "TCP", 00:12:29.465 "adrfam": "IPv4", 00:12:29.465 "traddr": "10.0.0.1", 00:12:29.465 "trsvcid": "54312" 00:12:29.465 }, 00:12:29.465 "auth": { 00:12:29.465 "state": "completed", 00:12:29.465 "digest": "sha512", 00:12:29.465 "dhgroup": "ffdhe2048" 00:12:29.465 } 00:12:29.465 } 00:12:29.465 ]' 00:12:29.465 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.723 16:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.981 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:29.981 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.917 16:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.917 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.484 00:12:31.484 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.484 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.484 16:00:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.748 { 00:12:31.748 "cntlid": 111, 00:12:31.748 "qid": 0, 00:12:31.748 "state": "enabled", 00:12:31.748 "thread": "nvmf_tgt_poll_group_000", 00:12:31.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:31.748 "listen_address": { 00:12:31.748 "trtype": "TCP", 00:12:31.748 "adrfam": "IPv4", 00:12:31.748 "traddr": "10.0.0.3", 00:12:31.748 "trsvcid": "4420" 00:12:31.748 }, 00:12:31.748 "peer_address": { 00:12:31.748 "trtype": "TCP", 00:12:31.748 "adrfam": "IPv4", 00:12:31.748 "traddr": "10.0.0.1", 00:12:31.748 "trsvcid": "54332" 00:12:31.748 }, 00:12:31.748 "auth": { 00:12:31.748 "state": "completed", 00:12:31.748 "digest": "sha512", 00:12:31.748 "dhgroup": "ffdhe2048" 00:12:31.748 } 00:12:31.748 } 00:12:31.748 ]' 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.748 16:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.314 16:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:32.314 16:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:32.882 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.140 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.707 00:12:33.707 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.707 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:12:33.707 16:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.965 { 00:12:33.965 "cntlid": 113, 00:12:33.965 "qid": 0, 00:12:33.965 "state": "enabled", 00:12:33.965 "thread": "nvmf_tgt_poll_group_000", 00:12:33.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:33.965 "listen_address": { 00:12:33.965 "trtype": "TCP", 00:12:33.965 "adrfam": "IPv4", 00:12:33.965 "traddr": "10.0.0.3", 00:12:33.965 "trsvcid": "4420" 00:12:33.965 }, 00:12:33.965 "peer_address": { 00:12:33.965 "trtype": "TCP", 00:12:33.965 "adrfam": "IPv4", 00:12:33.965 "traddr": "10.0.0.1", 00:12:33.965 "trsvcid": "54362" 00:12:33.965 }, 00:12:33.965 "auth": { 00:12:33.965 "state": "completed", 00:12:33.965 "digest": "sha512", 00:12:33.965 "dhgroup": "ffdhe3072" 00:12:33.965 } 00:12:33.965 } 00:12:33.965 ]' 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:33.965 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.222 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.222 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.222 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.479 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:34.480 16:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret 
DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:35.047 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.613 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.614 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.872 00:12:35.872 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.872 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.872 16:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.131 { 00:12:36.131 "cntlid": 115, 00:12:36.131 "qid": 0, 00:12:36.131 "state": "enabled", 00:12:36.131 "thread": "nvmf_tgt_poll_group_000", 00:12:36.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:36.131 "listen_address": { 00:12:36.131 "trtype": "TCP", 00:12:36.131 "adrfam": "IPv4", 00:12:36.131 "traddr": "10.0.0.3", 00:12:36.131 "trsvcid": "4420" 00:12:36.131 }, 00:12:36.131 "peer_address": { 00:12:36.131 "trtype": "TCP", 00:12:36.131 "adrfam": "IPv4", 00:12:36.131 "traddr": "10.0.0.1", 00:12:36.131 "trsvcid": "54404" 00:12:36.131 }, 00:12:36.131 "auth": { 00:12:36.131 "state": "completed", 00:12:36.131 "digest": "sha512", 00:12:36.131 "dhgroup": "ffdhe3072" 00:12:36.131 } 00:12:36.131 } 00:12:36.131 ]' 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:36.131 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.389 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:36.389 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.389 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.389 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.389 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.663 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:36.663 16:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid 
ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:37.252 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.823 16:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.083 00:12:38.083 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.083 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.083 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.341 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.341 { 00:12:38.341 "cntlid": 117, 00:12:38.341 "qid": 0, 00:12:38.341 "state": "enabled", 00:12:38.341 "thread": "nvmf_tgt_poll_group_000", 00:12:38.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:38.342 "listen_address": { 00:12:38.342 "trtype": "TCP", 00:12:38.342 "adrfam": "IPv4", 00:12:38.342 "traddr": "10.0.0.3", 00:12:38.342 "trsvcid": "4420" 00:12:38.342 }, 00:12:38.342 "peer_address": { 00:12:38.342 "trtype": "TCP", 00:12:38.342 "adrfam": "IPv4", 00:12:38.342 "traddr": "10.0.0.1", 00:12:38.342 "trsvcid": "54438" 00:12:38.342 }, 00:12:38.342 "auth": { 00:12:38.342 "state": "completed", 00:12:38.342 "digest": "sha512", 00:12:38.342 "dhgroup": "ffdhe3072" 00:12:38.342 } 00:12:38.342 } 00:12:38.342 ]' 00:12:38.342 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.600 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.601 16:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.859 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:38.859 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:39.798 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:39.799 16:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.799 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:40.430 00:12:40.430 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.430 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.430 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.689 { 00:12:40.689 "cntlid": 119, 00:12:40.689 "qid": 0, 00:12:40.689 "state": "enabled", 00:12:40.689 "thread": "nvmf_tgt_poll_group_000", 00:12:40.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:40.689 "listen_address": { 00:12:40.689 "trtype": "TCP", 00:12:40.689 "adrfam": "IPv4", 00:12:40.689 "traddr": "10.0.0.3", 00:12:40.689 "trsvcid": "4420" 00:12:40.689 }, 00:12:40.689 "peer_address": { 00:12:40.689 "trtype": "TCP", 00:12:40.689 "adrfam": "IPv4", 00:12:40.689 "traddr": "10.0.0.1", 00:12:40.689 "trsvcid": "39732" 00:12:40.689 }, 00:12:40.689 "auth": { 00:12:40.689 "state": "completed", 00:12:40.689 "digest": "sha512", 00:12:40.689 "dhgroup": "ffdhe3072" 00:12:40.689 } 00:12:40.689 } 00:12:40.689 ]' 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.689 16:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.254 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:41.254 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:41.821 16:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.080 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:42.647 00:12:42.647 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.647 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.647 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.906 { 00:12:42.906 "cntlid": 121, 00:12:42.906 "qid": 0, 00:12:42.906 "state": "enabled", 00:12:42.906 "thread": "nvmf_tgt_poll_group_000", 00:12:42.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:42.906 "listen_address": { 00:12:42.906 "trtype": "TCP", 00:12:42.906 "adrfam": "IPv4", 00:12:42.906 "traddr": "10.0.0.3", 00:12:42.906 "trsvcid": "4420" 00:12:42.906 }, 00:12:42.906 "peer_address": { 00:12:42.906 "trtype": "TCP", 00:12:42.906 "adrfam": "IPv4", 00:12:42.906 "traddr": "10.0.0.1", 00:12:42.906 "trsvcid": "39758" 00:12:42.906 }, 00:12:42.906 "auth": { 00:12:42.906 "state": "completed", 00:12:42.906 "digest": "sha512", 00:12:42.906 "dhgroup": "ffdhe4096" 00:12:42.906 } 00:12:42.906 } 00:12:42.906 ]' 00:12:42.906 16:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.906 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.165 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret 
DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:43.165 16:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:44.110 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.368 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.369 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.369 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.627 00:12:44.627 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.627 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.627 16:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.886 { 00:12:44.886 "cntlid": 123, 00:12:44.886 "qid": 0, 00:12:44.886 "state": "enabled", 00:12:44.886 "thread": "nvmf_tgt_poll_group_000", 00:12:44.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:44.886 "listen_address": { 00:12:44.886 "trtype": "TCP", 00:12:44.886 "adrfam": "IPv4", 00:12:44.886 "traddr": "10.0.0.3", 00:12:44.886 "trsvcid": "4420" 00:12:44.886 }, 00:12:44.886 "peer_address": { 00:12:44.886 "trtype": "TCP", 00:12:44.886 "adrfam": "IPv4", 00:12:44.886 "traddr": "10.0.0.1", 00:12:44.886 "trsvcid": "39792" 00:12:44.886 }, 00:12:44.886 "auth": { 00:12:44.886 "state": "completed", 00:12:44.886 "digest": "sha512", 00:12:44.886 "dhgroup": "ffdhe4096" 00:12:44.886 } 00:12:44.886 } 00:12:44.886 ]' 00:12:44.886 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.145 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.403 16:00:43 
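
The trace above repeats the same host-side cycle for every key/dhgroup combination: restrict the allowed digest and DH group, attach a controller with the key pair under test, confirm the controller came up, then detach. A minimal sketch of one such iteration, assuming the DH-HMAC-CHAP keys named key1/ckey1 were already registered earlier in the test (that setup is not shown here) and using the same rpc.py path and host socket as the trace:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # limit the host to one digest/dhgroup pair for this round
  $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # attach a controller that authenticates with key1/ckey1
  $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # confirm the controller exists; the target-side qpair check is shown further below
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

  # tear it down again before the next key is tried
  $RPC bdev_nvme_detach_controller nvme0
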
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:45.404 16:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.338 16:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.338 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.907 00:12:46.907 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.907 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.907 16:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.165 { 00:12:47.165 "cntlid": 125, 00:12:47.165 "qid": 0, 00:12:47.165 "state": "enabled", 00:12:47.165 "thread": "nvmf_tgt_poll_group_000", 00:12:47.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:47.165 "listen_address": { 00:12:47.165 "trtype": "TCP", 00:12:47.165 "adrfam": "IPv4", 00:12:47.165 "traddr": "10.0.0.3", 00:12:47.165 "trsvcid": "4420" 00:12:47.165 }, 00:12:47.165 "peer_address": { 00:12:47.165 "trtype": "TCP", 00:12:47.165 "adrfam": "IPv4", 00:12:47.165 "traddr": "10.0.0.1", 00:12:47.165 "trsvcid": "39808" 00:12:47.165 }, 00:12:47.165 "auth": { 00:12:47.165 "state": "completed", 00:12:47.165 "digest": "sha512", 00:12:47.165 "dhgroup": "ffdhe4096" 00:12:47.165 } 00:12:47.165 } 00:12:47.165 ]' 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:47.165 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.422 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.422 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.422 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.679 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:47.679 16:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:48.244 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.501 16:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:49.066 00:12:49.066 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.066 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.066 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.326 { 00:12:49.326 "cntlid": 127, 00:12:49.326 "qid": 0, 00:12:49.326 "state": "enabled", 00:12:49.326 "thread": "nvmf_tgt_poll_group_000", 00:12:49.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:49.326 "listen_address": { 00:12:49.326 "trtype": "TCP", 00:12:49.326 "adrfam": "IPv4", 00:12:49.326 "traddr": "10.0.0.3", 00:12:49.326 "trsvcid": "4420" 00:12:49.326 }, 00:12:49.326 "peer_address": { 00:12:49.326 "trtype": "TCP", 00:12:49.326 "adrfam": "IPv4", 00:12:49.326 "traddr": "10.0.0.1", 00:12:49.326 "trsvcid": "43884" 00:12:49.326 }, 00:12:49.326 "auth": { 00:12:49.326 "state": "completed", 00:12:49.326 "digest": "sha512", 00:12:49.326 "dhgroup": "ffdhe4096" 00:12:49.326 } 00:12:49.326 } 00:12:49.326 ]' 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:49.326 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.585 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.585 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.585 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.843 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:49.843 16:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.409 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.667 16:00:48 
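
Each iteration also exercises the kernel initiator path via nvme-cli, as in the nvme_connect/disconnect steps traced above. A sketch of that step, with the test's generated DHHC-1 secrets abbreviated (substitute real secrets when reproducing this outside the CI environment):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123

  # connect with a host secret and, for bidirectional auth, a controller secret
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 \
      --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."

  # drop the connection again once the controller shows up
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
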
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.667 16:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:51.233 00:12:51.233 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.233 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.233 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.490 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.490 { 00:12:51.490 "cntlid": 129, 00:12:51.490 "qid": 0, 00:12:51.490 "state": "enabled", 00:12:51.490 "thread": "nvmf_tgt_poll_group_000", 00:12:51.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:51.490 "listen_address": { 00:12:51.490 "trtype": "TCP", 00:12:51.490 "adrfam": "IPv4", 00:12:51.490 "traddr": "10.0.0.3", 00:12:51.490 "trsvcid": "4420" 00:12:51.490 }, 00:12:51.490 "peer_address": { 00:12:51.490 "trtype": "TCP", 00:12:51.490 "adrfam": "IPv4", 00:12:51.490 "traddr": "10.0.0.1", 00:12:51.490 "trsvcid": "43898" 00:12:51.490 }, 00:12:51.490 "auth": { 00:12:51.491 "state": "completed", 00:12:51.491 "digest": "sha512", 00:12:51.491 "dhgroup": "ffdhe6144" 00:12:51.491 } 00:12:51.491 } 00:12:51.491 ]' 00:12:51.491 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.750 16:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.009 16:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:52.009 16:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:12:52.942 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.942 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:52.942 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.942 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.942 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.943 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.943 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:52.943 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.202 16:00:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.202 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.769 00:12:53.769 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.769 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.769 16:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.026 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.026 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.027 { 00:12:54.027 "cntlid": 131, 00:12:54.027 "qid": 0, 00:12:54.027 "state": "enabled", 00:12:54.027 "thread": "nvmf_tgt_poll_group_000", 00:12:54.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:54.027 "listen_address": { 00:12:54.027 "trtype": "TCP", 00:12:54.027 "adrfam": "IPv4", 00:12:54.027 "traddr": "10.0.0.3", 00:12:54.027 "trsvcid": "4420" 00:12:54.027 }, 00:12:54.027 "peer_address": { 00:12:54.027 "trtype": "TCP", 00:12:54.027 "adrfam": "IPv4", 00:12:54.027 "traddr": "10.0.0.1", 00:12:54.027 "trsvcid": "43928" 00:12:54.027 }, 00:12:54.027 "auth": { 00:12:54.027 "state": "completed", 00:12:54.027 "digest": "sha512", 00:12:54.027 "dhgroup": "ffdhe6144" 00:12:54.027 } 00:12:54.027 } 00:12:54.027 ]' 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.027 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.284 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:54.284 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:12:54.284 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.285 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.285 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.543 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:54.543 16:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.478 16:00:53 
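
The rpc_cmd calls in the trace are the test harness's wrapper around the target's rpc.py instance. A plain equivalent of the target-side provisioning for one key, assuming the target application listens on rpc.py's default socket (/var/tmp/spdk.sock):

  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123

  # allow this host to connect to cnode0, authenticating with key2/ckey2
  $TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # ... attach, verify and detach as in the host-side sketch above ...

  # revoke the host again at the end of the iteration
  $TGT_RPC nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
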
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.478 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.737 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.737 16:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.995 00:12:56.252 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.252 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.252 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.512 { 00:12:56.512 "cntlid": 133, 00:12:56.512 "qid": 0, 00:12:56.512 "state": "enabled", 00:12:56.512 "thread": "nvmf_tgt_poll_group_000", 00:12:56.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:56.512 "listen_address": { 00:12:56.512 "trtype": "TCP", 00:12:56.512 "adrfam": "IPv4", 00:12:56.512 "traddr": "10.0.0.3", 00:12:56.512 "trsvcid": "4420" 00:12:56.512 }, 00:12:56.512 "peer_address": { 00:12:56.512 "trtype": "TCP", 00:12:56.512 "adrfam": "IPv4", 00:12:56.512 "traddr": "10.0.0.1", 00:12:56.512 "trsvcid": "43956" 00:12:56.512 }, 00:12:56.512 "auth": { 00:12:56.512 "state": "completed", 00:12:56.512 "digest": "sha512", 00:12:56.512 "dhgroup": "ffdhe6144" 00:12:56.512 } 00:12:56.512 } 00:12:56.512 ]' 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.512 16:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.079 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:57.079 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:57.646 16:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.905 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.471 00:12:58.471 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.471 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.471 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.730 { 00:12:58.730 "cntlid": 135, 00:12:58.730 "qid": 0, 00:12:58.730 "state": "enabled", 00:12:58.730 "thread": "nvmf_tgt_poll_group_000", 00:12:58.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:12:58.730 "listen_address": { 00:12:58.730 "trtype": "TCP", 00:12:58.730 "adrfam": "IPv4", 00:12:58.730 "traddr": "10.0.0.3", 00:12:58.730 "trsvcid": "4420" 00:12:58.730 }, 00:12:58.730 "peer_address": { 00:12:58.730 "trtype": "TCP", 00:12:58.730 "adrfam": "IPv4", 00:12:58.730 "traddr": "10.0.0.1", 00:12:58.730 "trsvcid": "43970" 00:12:58.730 }, 00:12:58.730 "auth": { 00:12:58.730 "state": "completed", 00:12:58.730 "digest": "sha512", 00:12:58.730 "dhgroup": "ffdhe6144" 00:12:58.730 } 00:12:58.730 } 00:12:58.730 ]' 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.730 16:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.296 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:59.296 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:59.863 16:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.122 16:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:01.062 00:13:01.062 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.062 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.062 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.319 { 00:13:01.319 "cntlid": 137, 00:13:01.319 "qid": 0, 00:13:01.319 "state": "enabled", 00:13:01.319 "thread": "nvmf_tgt_poll_group_000", 00:13:01.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:01.319 "listen_address": { 00:13:01.319 "trtype": "TCP", 00:13:01.319 "adrfam": "IPv4", 00:13:01.319 "traddr": "10.0.0.3", 00:13:01.319 "trsvcid": "4420" 00:13:01.319 }, 00:13:01.319 "peer_address": { 00:13:01.319 "trtype": "TCP", 00:13:01.319 "adrfam": "IPv4", 00:13:01.319 "traddr": "10.0.0.1", 00:13:01.319 "trsvcid": "49864" 00:13:01.319 }, 00:13:01.319 "auth": { 00:13:01.319 "state": "completed", 00:13:01.319 "digest": "sha512", 00:13:01.319 "dhgroup": "ffdhe8192" 00:13:01.319 } 00:13:01.319 } 00:13:01.319 ]' 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.319 16:00:59 
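
The digest check just shown, together with the dhgroup and state checks that follow it, verifies the negotiated authentication parameters reported by nvmf_subsystem_get_qpairs. A condensed form of those checks, using the field names from the JSON above and the TGT_RPC shorthand from the earlier sketch:

  qpairs=$($TGT_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
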
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.319 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.578 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:13:01.578 16:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.516 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.775 16:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.775 16:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:03.344 00:13:03.344 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.344 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.344 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.602 { 00:13:03.602 "cntlid": 139, 00:13:03.602 "qid": 0, 00:13:03.602 "state": "enabled", 00:13:03.602 "thread": "nvmf_tgt_poll_group_000", 00:13:03.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:03.602 "listen_address": { 00:13:03.602 "trtype": "TCP", 00:13:03.602 "adrfam": "IPv4", 00:13:03.602 "traddr": "10.0.0.3", 00:13:03.602 "trsvcid": "4420" 00:13:03.602 }, 00:13:03.602 "peer_address": { 00:13:03.602 "trtype": "TCP", 00:13:03.602 "adrfam": "IPv4", 00:13:03.602 "traddr": "10.0.0.1", 00:13:03.602 "trsvcid": "49886" 00:13:03.602 }, 00:13:03.602 "auth": { 00:13:03.602 "state": "completed", 00:13:03.602 "digest": "sha512", 00:13:03.602 "dhgroup": "ffdhe8192" 00:13:03.602 } 00:13:03.602 } 00:13:03.602 ]' 00:13:03.602 16:01:01 
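
Note that the key3 iterations in this trace configure one-way authentication only: nvmf_subsystem_add_host passes --dhchap-key key3 without a controller key, and the matching nvme connect supplies only --dhchap-secret. A sketch of that variant, reusing the RPC/TGT_RPC/HOSTNQN shorthand from the earlier sketches:

  # host authenticates to the target, but does not require the controller to authenticate back
  $TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
  $RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
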
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.602 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.860 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.860 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.860 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.860 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.860 16:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.122 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:13:04.122 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: --dhchap-ctrl-secret DHHC-1:02:Yjg3YmFlYTI3ZDI2ODYwZTFjOGQyOTU3YjY4N2I3OTY1MTk1ZThlNjI0OTg3NzlijsKZeA==: 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:04.694 16:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.953 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.521 00:13:05.779 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.779 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.779 16:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.038 { 00:13:06.038 "cntlid": 141, 00:13:06.038 "qid": 0, 00:13:06.038 "state": "enabled", 00:13:06.038 "thread": "nvmf_tgt_poll_group_000", 00:13:06.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:06.038 "listen_address": { 00:13:06.038 "trtype": "TCP", 00:13:06.038 "adrfam": "IPv4", 00:13:06.038 "traddr": "10.0.0.3", 00:13:06.038 "trsvcid": "4420" 00:13:06.038 }, 00:13:06.038 "peer_address": { 00:13:06.038 "trtype": "TCP", 00:13:06.038 "adrfam": "IPv4", 00:13:06.038 "traddr": "10.0.0.1", 00:13:06.038 "trsvcid": "49908" 00:13:06.038 }, 00:13:06.038 "auth": { 00:13:06.038 "state": "completed", 00:13:06.038 "digest": 
"sha512", 00:13:06.038 "dhgroup": "ffdhe8192" 00:13:06.038 } 00:13:06.038 } 00:13:06.038 ]' 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.038 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.605 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:13:06.605 16:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:01:NTA5ZDcwODU4YTMxZGU5Y2E2MmIyOGUzMmNlZjhkNTaSqmoA: 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.243 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.501 16:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.068 00:13:08.068 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.068 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.068 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.635 { 00:13:08.635 "cntlid": 143, 00:13:08.635 "qid": 0, 00:13:08.635 "state": "enabled", 00:13:08.635 "thread": "nvmf_tgt_poll_group_000", 00:13:08.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:08.635 "listen_address": { 00:13:08.635 "trtype": "TCP", 00:13:08.635 "adrfam": "IPv4", 00:13:08.635 "traddr": "10.0.0.3", 00:13:08.635 "trsvcid": "4420" 00:13:08.635 }, 00:13:08.635 "peer_address": { 00:13:08.635 "trtype": "TCP", 00:13:08.635 "adrfam": "IPv4", 00:13:08.635 "traddr": "10.0.0.1", 00:13:08.635 "trsvcid": "49938" 00:13:08.635 }, 00:13:08.635 "auth": { 00:13:08.635 "state": "completed", 00:13:08.635 
"digest": "sha512", 00:13:08.635 "dhgroup": "ffdhe8192" 00:13:08.635 } 00:13:08.635 } 00:13:08.635 ]' 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.635 16:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.894 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:08.894 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:09.461 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.720 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.979 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.979 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.979 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.979 16:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.547 00:13:10.547 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.547 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.547 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.806 { 00:13:10.806 "cntlid": 145, 00:13:10.806 "qid": 0, 00:13:10.806 "state": "enabled", 00:13:10.806 "thread": "nvmf_tgt_poll_group_000", 00:13:10.806 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:10.806 "listen_address": { 00:13:10.806 "trtype": "TCP", 00:13:10.806 "adrfam": "IPv4", 00:13:10.806 "traddr": "10.0.0.3", 00:13:10.806 "trsvcid": "4420" 00:13:10.806 }, 00:13:10.806 "peer_address": { 00:13:10.806 "trtype": "TCP", 00:13:10.806 "adrfam": "IPv4", 00:13:10.806 "traddr": "10.0.0.1", 00:13:10.806 "trsvcid": "40970" 00:13:10.806 }, 00:13:10.806 "auth": { 00:13:10.806 "state": "completed", 00:13:10.806 "digest": "sha512", 00:13:10.806 "dhgroup": "ffdhe8192" 00:13:10.806 } 00:13:10.806 } 00:13:10.806 ]' 00:13:10.806 16:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.806 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.806 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.064 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:11.064 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.064 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.064 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.064 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.322 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:13:11.322 16:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:00:NzEwY2JjYTc5YjExNzBiNjJkYjYyODFkMGY5NTczNzM5ZWFiNmMzNDQ1NDYzOWRj/FsFvA==: --dhchap-ctrl-secret DHHC-1:03:OTUwZjg2NzAyYmYyYmQ0OGZhYmJmY2M0YjhiZjEzY2Q3OTZhY2E1YzM2ZTM0ZTA3OTE4MjJiYjVkOWFlOGJkMz0ksFs=: 00:13:11.889 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.889 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:11.889 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.889 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 00:13:12.147 16:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:12.147 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:12.716 request: 00:13:12.716 { 00:13:12.716 "name": "nvme0", 00:13:12.716 "trtype": "tcp", 00:13:12.716 "traddr": "10.0.0.3", 00:13:12.716 "adrfam": "ipv4", 00:13:12.716 "trsvcid": "4420", 00:13:12.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:12.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:12.716 "prchk_reftag": false, 00:13:12.716 "prchk_guard": false, 00:13:12.716 "hdgst": false, 00:13:12.716 "ddgst": false, 00:13:12.716 "dhchap_key": "key2", 00:13:12.716 "allow_unrecognized_csi": false, 00:13:12.716 "method": "bdev_nvme_attach_controller", 00:13:12.716 "req_id": 1 00:13:12.716 } 00:13:12.716 Got JSON-RPC error response 00:13:12.716 response: 00:13:12.716 { 00:13:12.716 "code": -5, 00:13:12.716 "message": "Input/output error" 00:13:12.716 } 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:12.716 
16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:12.716 16:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:13.283 request: 00:13:13.283 { 00:13:13.283 "name": "nvme0", 00:13:13.283 "trtype": "tcp", 00:13:13.283 "traddr": "10.0.0.3", 00:13:13.283 "adrfam": "ipv4", 00:13:13.283 "trsvcid": "4420", 00:13:13.283 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:13.283 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:13.283 "prchk_reftag": false, 00:13:13.283 "prchk_guard": false, 00:13:13.283 "hdgst": false, 00:13:13.283 "ddgst": false, 00:13:13.283 "dhchap_key": "key1", 00:13:13.283 "dhchap_ctrlr_key": "ckey2", 00:13:13.283 "allow_unrecognized_csi": false, 00:13:13.283 "method": "bdev_nvme_attach_controller", 00:13:13.283 "req_id": 1 00:13:13.283 } 00:13:13.283 Got JSON-RPC error response 00:13:13.283 response: 00:13:13.283 { 
00:13:13.283 "code": -5, 00:13:13.283 "message": "Input/output error" 00:13:13.283 } 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.283 16:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.850 
request: 00:13:13.850 { 00:13:13.850 "name": "nvme0", 00:13:13.850 "trtype": "tcp", 00:13:13.850 "traddr": "10.0.0.3", 00:13:13.850 "adrfam": "ipv4", 00:13:13.850 "trsvcid": "4420", 00:13:13.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:13.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:13.850 "prchk_reftag": false, 00:13:13.850 "prchk_guard": false, 00:13:13.850 "hdgst": false, 00:13:13.850 "ddgst": false, 00:13:13.850 "dhchap_key": "key1", 00:13:13.850 "dhchap_ctrlr_key": "ckey1", 00:13:13.850 "allow_unrecognized_csi": false, 00:13:13.850 "method": "bdev_nvme_attach_controller", 00:13:13.850 "req_id": 1 00:13:13.850 } 00:13:13.850 Got JSON-RPC error response 00:13:13.850 response: 00:13:13.850 { 00:13:13.850 "code": -5, 00:13:13.850 "message": "Input/output error" 00:13:13.850 } 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 67536 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67536 ']' 00:13:13.850 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67536 00:13:13.851 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:13.851 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.851 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67536 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.109 killing process with pid 67536 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67536' 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67536 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67536 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:14.109 16:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70669 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70669 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70669 ']' 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.109 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70669 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 70669 ']' 00:13:14.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
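The trace above stops the first target (killprocess 67536) and starts a replacement with --wait-for-rpc, which holds framework initialization until it is driven over RPC, and -L nvmf_auth, which enables the nvmf_auth debug log flag. A minimal sketch of that restart, reusing only the netns name, binary path and flags visible in this run:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # wait until the new process (nvmfpid=70669 here) is listening on /var/tmp/spdk.sock
    # before sending any RPCs; auth.sh does this with waitforlisten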
00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.675 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.934 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:14.934 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:14.934 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 null0 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tlQ 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.yWm ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yWm 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rmU 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qQf ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qQf 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:14.934 16:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Og0 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.3Yg ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Yg 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fTI 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:14.934 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
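Each keyring_file_add_key call above registers one of the /tmp/spdk.key-* files with the target's keyring, and connect_authenticate then uses key3 on both sides: the subsystem requires the host NQN to authenticate with --dhchap-key key3, and the host-side bdev attaches with the same key. A condensed sketch of that sequence taken from the surrounding trace (rpc.py = /home/vagrant/spdk_repo/spdk/scripts/rpc.py; target RPCs go to the default /var/tmp/spdk.sock, host RPCs use -s /var/tmp/host.sock):

    rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.fTI
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3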
00:13:14.935 16:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.870 nvme0n1 00:13:16.128 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.128 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.128 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.386 { 00:13:16.386 "cntlid": 1, 00:13:16.386 "qid": 0, 00:13:16.386 "state": "enabled", 00:13:16.386 "thread": "nvmf_tgt_poll_group_000", 00:13:16.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:16.386 "listen_address": { 00:13:16.386 "trtype": "TCP", 00:13:16.386 "adrfam": "IPv4", 00:13:16.386 "traddr": "10.0.0.3", 00:13:16.386 "trsvcid": "4420" 00:13:16.386 }, 00:13:16.386 "peer_address": { 00:13:16.386 "trtype": "TCP", 00:13:16.386 "adrfam": "IPv4", 00:13:16.386 "traddr": "10.0.0.1", 00:13:16.386 "trsvcid": "41018" 00:13:16.386 }, 00:13:16.386 "auth": { 00:13:16.386 "state": "completed", 00:13:16.386 "digest": "sha512", 00:13:16.386 "dhgroup": "ffdhe8192" 00:13:16.386 } 00:13:16.386 } 00:13:16.386 ]' 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.386 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.953 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:16.953 16:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key3 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:17.519 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:17.776 16:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.342 request: 00:13:18.342 { 00:13:18.342 "name": "nvme0", 00:13:18.342 "trtype": "tcp", 00:13:18.342 "traddr": "10.0.0.3", 00:13:18.342 "adrfam": "ipv4", 00:13:18.342 "trsvcid": "4420", 00:13:18.342 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:18.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:18.342 "prchk_reftag": false, 00:13:18.342 "prchk_guard": false, 00:13:18.342 "hdgst": false, 00:13:18.342 "ddgst": false, 00:13:18.342 "dhchap_key": "key3", 00:13:18.342 "allow_unrecognized_csi": false, 00:13:18.343 "method": "bdev_nvme_attach_controller", 00:13:18.343 "req_id": 1 00:13:18.343 } 00:13:18.343 Got JSON-RPC error response 00:13:18.343 response: 00:13:18.343 { 00:13:18.343 "code": -5, 00:13:18.343 "message": "Input/output error" 00:13:18.343 } 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.343 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.909 request: 00:13:18.909 { 00:13:18.909 "name": "nvme0", 00:13:18.909 "trtype": "tcp", 00:13:18.909 "traddr": "10.0.0.3", 00:13:18.909 "adrfam": "ipv4", 00:13:18.909 "trsvcid": "4420", 00:13:18.909 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:18.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:18.909 "prchk_reftag": false, 00:13:18.909 "prchk_guard": false, 00:13:18.909 "hdgst": false, 00:13:18.909 "ddgst": false, 00:13:18.909 "dhchap_key": "key3", 00:13:18.909 "allow_unrecognized_csi": false, 00:13:18.909 "method": "bdev_nvme_attach_controller", 00:13:18.909 "req_id": 1 00:13:18.909 } 00:13:18.909 Got JSON-RPC error response 00:13:18.909 response: 00:13:18.909 { 00:13:18.909 "code": -5, 00:13:18.909 "message": "Input/output error" 00:13:18.909 } 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:18.909 16:01:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:19.168 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:19.777 request: 00:13:19.777 { 00:13:19.777 "name": "nvme0", 00:13:19.777 "trtype": "tcp", 00:13:19.777 "traddr": "10.0.0.3", 00:13:19.777 "adrfam": "ipv4", 00:13:19.777 "trsvcid": "4420", 00:13:19.777 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:19.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:19.778 "prchk_reftag": false, 00:13:19.778 "prchk_guard": false, 00:13:19.778 "hdgst": false, 00:13:19.778 "ddgst": false, 00:13:19.778 "dhchap_key": "key0", 00:13:19.778 "dhchap_ctrlr_key": "key1", 00:13:19.778 "allow_unrecognized_csi": false, 00:13:19.778 "method": "bdev_nvme_attach_controller", 00:13:19.778 "req_id": 1 00:13:19.778 } 00:13:19.778 Got JSON-RPC error response 00:13:19.778 response: 00:13:19.778 { 00:13:19.778 "code": -5, 00:13:19.778 "message": "Input/output error" 00:13:19.778 } 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:19.778 16:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:20.035 nvme0n1 00:13:20.035 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:20.035 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:20.035 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.292 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.292 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.293 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:20.551 16:01:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:21.933 nvme0n1 00:13:21.933 16:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:21.933 16:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:21.933 16:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.933 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.933 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:21.933 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.933 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.934 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.934 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:21.934 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.934 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:22.192 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.192 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:22.192 16:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid ca768c1a-78f6-4242-8009-85e76e7a8123 -l 0 --dhchap-secret DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: --dhchap-ctrl-secret DHHC-1:03:NmIwOGEwYTBiOWIxYzA4MzhhNzY2M2M4Y2NlYzVkM2M5MWQ3MTYzNWRjMTEzZTYwMjVkN2I0ZDc1ZGM5ZmVlZJGp+3c=: 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.127 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:23.386 16:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:23.952 request: 00:13:23.952 { 00:13:23.952 "name": "nvme0", 00:13:23.952 "trtype": "tcp", 00:13:23.952 "traddr": "10.0.0.3", 00:13:23.952 "adrfam": "ipv4", 00:13:23.952 "trsvcid": "4420", 00:13:23.952 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:23.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123", 00:13:23.953 "prchk_reftag": false, 00:13:23.953 "prchk_guard": false, 00:13:23.953 "hdgst": false, 00:13:23.953 "ddgst": false, 00:13:23.953 "dhchap_key": "key1", 00:13:23.953 "allow_unrecognized_csi": false, 00:13:23.953 "method": "bdev_nvme_attach_controller", 00:13:23.953 "req_id": 1 00:13:23.953 } 00:13:23.953 Got JSON-RPC error response 00:13:23.953 response: 00:13:23.953 { 00:13:23.953 "code": -5, 00:13:23.953 "message": "Input/output error" 00:13:23.953 } 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:23.953 16:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:25.326 nvme0n1 00:13:25.326 
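The attach/verify/detach cycle that the trace above keeps repeating reduces to three RPCs against the host application's socket. The following is a minimal sketch only, reusing the socket path, target address, NQNs and key names (key2/key3) visible in this run; the DH-HMAC-CHAP keyring entries themselves are loaded earlier in auth.sh and are not reproduced here.

# Sketch -- mirrors the rpc.py calls shown in the xtrace above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"

# Attach with bidirectional DH-HMAC-CHAP (host offers key2, expects the controller to use key3).
$RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Verify the controller was created, then detach it again.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
$RPC bdev_nvme_detach_controller nvme0

When the offered keys do not match what the subsystem allows, the same attach call fails with the JSON-RPC "Input/output error" (code -5) responses recorded earlier in this log.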
16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:25.326 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:25.326 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.326 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.326 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.326 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:25.584 16:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:25.842 nvme0n1 00:13:26.100 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:26.100 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.100 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.439 16:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: '' 2s 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: ]] 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGZiMTA1Y2NjZjhjN2NjZjM3MGFlMWJlOTg2Njc5YjNQszLY: 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:26.439 16:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: 2s 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:28.969 16:01:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: ]] 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmI3ODdlZGY5ODY3NjIwMzJkYWU3NmVkNTllNzI0MTk3NDRlZjk0OTQ4Y2UzMDM1yuannQ==: 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:28.969 16:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:30.870 16:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:31.805 nvme0n1 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:31.805 16:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:32.376 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:13:32.376 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.376 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:13:32.653 16:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:13:32.911 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:13:32.911 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.911 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:33.169 16:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:33.169 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:33.735 request: 00:13:33.735 { 00:13:33.735 "name": "nvme0", 00:13:33.735 "dhchap_key": "key1", 00:13:33.735 "dhchap_ctrlr_key": "key3", 00:13:33.735 "method": "bdev_nvme_set_keys", 00:13:33.735 "req_id": 1 00:13:33.735 } 00:13:33.735 Got JSON-RPC error response 00:13:33.735 response: 00:13:33.735 { 00:13:33.735 "code": -13, 00:13:33.735 "message": "Permission denied" 00:13:33.735 } 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.735 16:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:34.301 16:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:13:34.301 16:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:13:35.237 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:13:35.237 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:13:35.237 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:35.494 16:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:13:36.429 nvme0n1 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:36.429 16:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:13:37.362 request: 00:13:37.362 { 00:13:37.362 "name": "nvme0", 00:13:37.362 "dhchap_key": "key2", 00:13:37.362 "dhchap_ctrlr_key": "key0", 00:13:37.362 "method": "bdev_nvme_set_keys", 00:13:37.362 "req_id": 1 00:13:37.362 } 00:13:37.362 Got JSON-RPC error response 00:13:37.362 response: 00:13:37.362 { 00:13:37.362 "code": -13, 00:13:37.362 "message": "Permission denied" 00:13:37.362 } 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:13:37.362 16:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 67568 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 67568 ']' 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 67568 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.736 16:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67568 00:13:38.994 killing process with pid 67568 00:13:38.994 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:38.994 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:38.994 16:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67568' 00:13:38.994 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 67568 00:13:38.994 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 67568 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.252 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.252 rmmod nvme_tcp 00:13:39.252 rmmod nvme_fabrics 00:13:39.252 rmmod nvme_keyring 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70669 ']' 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 70669 ']' 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70669' 00:13:39.510 killing process with pid 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 70669 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
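The cleanup running here has a fixed shape: stop the host-side application and the nvmf target by pid, unload the kernel NVMe/TCP initiator modules, and restore any iptables rules added for the test network. A rough reconstruction from the xtrace records around this point, with the helper deliberately simplified (the real killprocess in autotest_common.sh also inspects the process name via ps before killing):

# Simplified reconstruction -- not the literal autotest_common.sh source.
killprocess() {
    local pid=$1
    [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null || return 0
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

killprocess 67568      # host.sock application (reactor_1 in this run)
killprocess 70669      # nvmf target (reactor_0 in this run)
modprobe -v -r nvme-tcp        # rmmod output above: nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore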
00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:39.510 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tlQ /tmp/spdk.key-sha256.rmU /tmp/spdk.key-sha384.Og0 /tmp/spdk.key-sha512.fTI /tmp/spdk.key-sha512.yWm /tmp/spdk.key-sha384.qQf /tmp/spdk.key-sha256.3Yg '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:39.768 00:13:39.768 real 3m17.308s 00:13:39.768 user 7m52.310s 00:13:39.768 sys 0m30.532s 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.768 16:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.768 ************************************ 00:13:39.768 END TEST nvmf_auth_target 
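With nvmf_auth_target finished, one pattern from the run above is worth distilling before the bdevio log starts: every re-key happens in two halves, target first, host second, and a deliberate mismatch is asserted to fail with JSON-RPC error -13 ("Permission denied"). A hedged sketch of one successful rotation step, reusing the NQNs and host socket from this run (rpc_cmd in the log is the test's wrapper around rpc.py talking to the target application; the target's socket path is not shown in this excerpt):

# Target side: restrict this host to key2 (host) / key3 (controller) on the subsystem.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: rotate the live controller to the matching keys.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Negative case exercised above: asking for keys the subsystem no longer allows
# (e.g. key1/key3) makes bdev_nvme_set_keys fail with code -13, "Permission denied".

The kernel-initiator variant of the same rotation is the nvme_set_keys helper seen earlier, which writes the DHHC-1 secrets under /sys/devices/virtual/nvme-fabrics/ctl/nvme0 for the connection made with nvme connect.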
00:13:39.768 ************************************ 00:13:40.027 16:01:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:13:40.027 16:01:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:40.027 16:01:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:40.027 16:01:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.027 16:01:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 ************************************ 00:13:40.027 START TEST nvmf_bdevio_no_huge 00:13:40.027 ************************************ 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:40.028 * Looking for test storage... 00:13:40.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.028 --rc genhtml_branch_coverage=1 00:13:40.028 --rc genhtml_function_coverage=1 00:13:40.028 --rc genhtml_legend=1 00:13:40.028 --rc geninfo_all_blocks=1 00:13:40.028 --rc geninfo_unexecuted_blocks=1 00:13:40.028 00:13:40.028 ' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.028 --rc genhtml_branch_coverage=1 00:13:40.028 --rc genhtml_function_coverage=1 00:13:40.028 --rc genhtml_legend=1 00:13:40.028 --rc geninfo_all_blocks=1 00:13:40.028 --rc geninfo_unexecuted_blocks=1 00:13:40.028 00:13:40.028 ' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.028 --rc genhtml_branch_coverage=1 00:13:40.028 --rc genhtml_function_coverage=1 00:13:40.028 --rc genhtml_legend=1 00:13:40.028 --rc geninfo_all_blocks=1 00:13:40.028 --rc geninfo_unexecuted_blocks=1 00:13:40.028 00:13:40.028 ' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:40.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:40.028 --rc genhtml_branch_coverage=1 00:13:40.028 --rc genhtml_function_coverage=1 00:13:40.028 --rc genhtml_legend=1 00:13:40.028 --rc geninfo_all_blocks=1 00:13:40.028 --rc geninfo_unexecuted_blocks=1 00:13:40.028 00:13:40.028 ' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:40.028 
16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.028 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:40.029 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:40.029 
16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:40.029 Cannot find device "nvmf_init_br" 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:40.029 Cannot find device "nvmf_init_br2" 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:40.029 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:40.029 Cannot find device "nvmf_tgt_br" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:40.288 Cannot find device "nvmf_tgt_br2" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:40.288 Cannot find device "nvmf_init_br" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:40.288 Cannot find device "nvmf_init_br2" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:40.288 Cannot find device "nvmf_tgt_br" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:40.288 Cannot find device "nvmf_tgt_br2" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:40.288 Cannot find device "nvmf_br" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:40.288 Cannot find device "nvmf_init_if" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:40.288 Cannot find device "nvmf_init_if2" 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:13:40.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:40.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:40.288 16:01:38 
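Note: the "Cannot find device" / "Cannot open network namespace" lines above are only the pre-cleanup of a topology that does not exist yet on a fresh host; nvmf_veth_init then builds it: a target namespace, veth pairs whose *_br ends stay on the host, initiator addresses 10.0.0.1-2/24 on the host side, target addresses 10.0.0.3-4/24 inside the namespace, all joined through the nvmf_br bridge in the next step. Condensed to a single initiator/target pair, the setup amounts to (sketch of the traced commands, link-up steps omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # both *_br ends get enslaved to the bridge
    ip link set nvmf_tgt_br master nvmf_br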
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.288 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:40.547 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.547 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:13:40.547 00:13:40.547 --- 10.0.0.3 ping statistics --- 00:13:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.547 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:40.547 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:40.547 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:13:40.547 00:13:40.547 --- 10.0.0.4 ping statistics --- 00:13:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.547 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:40.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:40.547 00:13:40.547 --- 10.0.0.1 ping statistics --- 00:13:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.547 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:40.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:40.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:13:40.547 00:13:40.547 --- 10.0.0.2 ping statistics --- 00:13:40.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.547 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=71316 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 71316 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 71316 ']' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.547 16:01:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:40.547 [2024-11-20 16:01:38.681782] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
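Note on the nvmfappstart trace above: the target is launched through the namespace wrapper and, because of --no-huge, with a fixed 1024 MiB of ordinary (non-hugepage) memory; the core mask 0x78 is 0b01111000, which matches the four reactors reported on cores 3-6 below. The test then blocks until the app answers on /var/tmp/spdk.sock. A rough sketch of what that amounts to (the polling loop is illustrative, not the script's actual waitforlisten):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    # poll the RPC socket until the application is up (bounded, ~10 s)
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break
        sleep 0.1
    done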
00:13:40.547 [2024-11-20 16:01:38.681892] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:40.807 [2024-11-20 16:01:38.837002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.807 [2024-11-20 16:01:38.938506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.807 [2024-11-20 16:01:38.938608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.807 [2024-11-20 16:01:38.938628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.807 [2024-11-20 16:01:38.938642] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.807 [2024-11-20 16:01:38.938654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.807 [2024-11-20 16:01:38.939881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:40.807 [2024-11-20 16:01:38.939957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:40.807 [2024-11-20 16:01:38.940052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:40.807 [2024-11-20 16:01:38.940067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.807 [2024-11-20 16:01:38.946230] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.749 [2024-11-20 16:01:39.888055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.749 Malloc0 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.749 16:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.749 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:41.750 [2024-11-20 16:01:39.934908] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:41.750 { 00:13:41.750 "params": { 00:13:41.750 "name": "Nvme$subsystem", 00:13:41.750 "trtype": "$TEST_TRANSPORT", 00:13:41.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.750 "adrfam": "ipv4", 00:13:41.750 "trsvcid": "$NVMF_PORT", 00:13:41.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.750 "hdgst": ${hdgst:-false}, 00:13:41.750 "ddgst": ${ddgst:-false} 00:13:41.750 }, 00:13:41.750 "method": "bdev_nvme_attach_controller" 00:13:41.750 } 00:13:41.750 EOF 00:13:41.750 )") 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
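Note: the rpc_cmd traces around here are the entire target-side provisioning for the bdevio run. Written out as plain rpc.py calls against the default /var/tmp/spdk.sock socket, the same sequence is (a sketch restating the traced commands, not extra configuration):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                                      # 64 MiB / 512 B-block RAM bdev
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevio is then pointed at that listener through the JSON produced by gen_nvmf_target_json, which resolves to the bdev_nvme_attach_controller block printed just below.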
00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:13:41.750 16:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:41.750 "params": { 00:13:41.750 "name": "Nvme1", 00:13:41.750 "trtype": "tcp", 00:13:41.750 "traddr": "10.0.0.3", 00:13:41.750 "adrfam": "ipv4", 00:13:41.750 "trsvcid": "4420", 00:13:41.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:41.750 "hdgst": false, 00:13:41.750 "ddgst": false 00:13:41.750 }, 00:13:41.750 "method": "bdev_nvme_attach_controller" 00:13:41.750 }' 00:13:42.008 [2024-11-20 16:01:40.002942] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:13:42.008 [2024-11-20 16:01:40.003103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid71352 ] 00:13:42.008 [2024-11-20 16:01:40.209737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:42.265 [2024-11-20 16:01:40.311788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.265 [2024-11-20 16:01:40.311977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:42.265 [2024-11-20 16:01:40.311992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.265 [2024-11-20 16:01:40.338525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:42.523 I/O targets: 00:13:42.523 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:42.523 00:13:42.523 00:13:42.523 CUnit - A unit testing framework for C - Version 2.1-3 00:13:42.523 http://cunit.sourceforge.net/ 00:13:42.523 00:13:42.523 00:13:42.523 Suite: bdevio tests on: Nvme1n1 00:13:42.523 Test: blockdev write read block ...passed 00:13:42.523 Test: blockdev write zeroes read block ...passed 00:13:42.523 Test: blockdev write zeroes read no split ...passed 00:13:42.523 Test: blockdev write zeroes read split ...passed 00:13:42.523 Test: blockdev write zeroes read split partial ...passed 00:13:42.523 Test: blockdev reset ...[2024-11-20 16:01:40.615749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:42.523 [2024-11-20 16:01:40.615952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x846310 (9): Bad file descriptor 00:13:42.523 [2024-11-20 16:01:40.629271] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:42.523 passed 00:13:42.523 Test: blockdev write read 8 blocks ...passed 00:13:42.523 Test: blockdev write read size > 128k ...passed 00:13:42.523 Test: blockdev write read invalid size ...passed 00:13:42.523 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:42.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:42.523 Test: blockdev write read max offset ...passed 00:13:42.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:42.523 Test: blockdev writev readv 8 blocks ...passed 00:13:42.523 Test: blockdev writev readv 30 x 1block ...passed 00:13:42.523 Test: blockdev writev readv block ...passed 00:13:42.523 Test: blockdev writev readv size > 128k ...passed 00:13:42.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:42.523 Test: blockdev comparev and writev ...[2024-11-20 16:01:40.640291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.640353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.640379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.640393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.640985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.641016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.641037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.641050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.641551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.641593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.641615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.641628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.642151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.642186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:42.523 [2024-11-20 16:01:40.642207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:42.523 [2024-11-20 16:01:40.642221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:42.523 passed 00:13:42.524 Test: blockdev nvme passthru rw ...passed 00:13:42.524 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:01:40.643386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.524 [2024-11-20 16:01:40.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:42.524 [2024-11-20 16:01:40.643589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.524 [2024-11-20 16:01:40.643608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:42.524 [2024-11-20 16:01:40.643759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.524 [2024-11-20 16:01:40.643787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:42.524 [2024-11-20 16:01:40.643966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:42.524 [2024-11-20 16:01:40.643995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:42.524 passed 00:13:42.524 Test: blockdev nvme admin passthru ...passed 00:13:42.524 Test: blockdev copy ...passed 00:13:42.524 00:13:42.524 Run Summary: Type Total Ran Passed Failed Inactive 00:13:42.524 suites 1 1 n/a 0 0 00:13:42.524 tests 23 23 23 0 0 00:13:42.524 asserts 152 152 152 0 n/a 00:13:42.524 00:13:42.524 Elapsed time = 0.174 seconds 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.088 rmmod nvme_tcp 00:13:43.088 rmmod nvme_fabrics 00:13:43.088 rmmod nvme_keyring 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 71316 ']' 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 71316 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 71316 ']' 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 71316 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71316 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:43.088 killing process with pid 71316 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71316' 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 71316 00:13:43.088 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 71316 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:43.653 16:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.653 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:13:43.912 00:13:43.912 real 0m3.881s 00:13:43.912 user 0m12.649s 00:13:43.912 sys 0m1.554s 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.912 ************************************ 00:13:43.912 END TEST nvmf_bdevio_no_huge 00:13:43.912 ************************************ 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.912 ************************************ 00:13:43.912 START TEST nvmf_tls 00:13:43.912 ************************************ 00:13:43.912 16:01:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:43.912 * Looking for test storage... 
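Note: the bdevio run ends with nvmftestfini, which strips the SPDK-tagged firewall rules and deletes the veth/bridge/namespace topology; nvmf_tls, which starts here, then rebuilds the very same environment, which is why the lcov gate, the common.sh variables and the veth setup traces all repeat below. The iptr helper used in the teardown relies on every rule having been added with an "SPDK_NVMF:" comment, so the cleanup reduces to (sketch):

    # drop exactly the rules the tests added: filter the tagged lines out of a save and restore it
    iptables-save | grep -v SPDK_NVMF | iptables-restore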
00:13:43.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:13:43.912 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:43.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.913 --rc genhtml_branch_coverage=1 00:13:43.913 --rc genhtml_function_coverage=1 00:13:43.913 --rc genhtml_legend=1 00:13:43.913 --rc geninfo_all_blocks=1 00:13:43.913 --rc geninfo_unexecuted_blocks=1 00:13:43.913 00:13:43.913 ' 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:43.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.913 --rc genhtml_branch_coverage=1 00:13:43.913 --rc genhtml_function_coverage=1 00:13:43.913 --rc genhtml_legend=1 00:13:43.913 --rc geninfo_all_blocks=1 00:13:43.913 --rc geninfo_unexecuted_blocks=1 00:13:43.913 00:13:43.913 ' 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:43.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.913 --rc genhtml_branch_coverage=1 00:13:43.913 --rc genhtml_function_coverage=1 00:13:43.913 --rc genhtml_legend=1 00:13:43.913 --rc geninfo_all_blocks=1 00:13:43.913 --rc geninfo_unexecuted_blocks=1 00:13:43.913 00:13:43.913 ' 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:43.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.913 --rc genhtml_branch_coverage=1 00:13:43.913 --rc genhtml_function_coverage=1 00:13:43.913 --rc genhtml_legend=1 00:13:43.913 --rc geninfo_all_blocks=1 00:13:43.913 --rc geninfo_unexecuted_blocks=1 00:13:43.913 00:13:43.913 ' 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.913 16:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.913 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.171 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:44.172 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:44.172 
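Note: the "[: : integer expression expected" diagnostic, seen here and in the bdevio run above, is common.sh line 33 evaluating '[' '' -eq 1 ']' on a flag that is unset in this configuration; test needs an integer on both sides of -eq, so it prints the diagnostic and the check simply evaluates false. A minimal reproduction with a placeholder name (SOME_FLAG is illustrative, not the variable the script actually tests):

    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ] && echo on          # prints the integer-expression diagnostic, check is false
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo on     # defaulting to 0 keeps the same check quiet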
16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:44.172 Cannot find device "nvmf_init_br" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:44.172 Cannot find device "nvmf_init_br2" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:44.172 Cannot find device "nvmf_tgt_br" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:44.172 Cannot find device "nvmf_tgt_br2" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:44.172 Cannot find device "nvmf_init_br" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:44.172 Cannot find device "nvmf_init_br2" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:44.172 Cannot find device "nvmf_tgt_br" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:44.172 Cannot find device "nvmf_tgt_br2" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:44.172 Cannot find device "nvmf_br" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:44.172 Cannot find device "nvmf_init_if" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:44.172 Cannot find device "nvmf_init_if2" 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:44.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:44.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:44.172 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:44.429 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:44.429 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:44.429 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:44.430 16:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:44.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:44.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:13:44.430 00:13:44.430 --- 10.0.0.3 ping statistics --- 00:13:44.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.430 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:44.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:44.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:13:44.430 00:13:44.430 --- 10.0.0.4 ping statistics --- 00:13:44.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.430 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:44.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:44.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:13:44.430 00:13:44.430 --- 10.0.0.1 ping statistics --- 00:13:44.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.430 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:44.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:44.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:44.430 00:13:44.430 --- 10.0.0.2 ping statistics --- 00:13:44.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:44.430 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71590 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71590 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71590 ']' 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.430 16:01:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:44.430 [2024-11-20 16:01:42.652087] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
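For readers following the trace, the nvmf_veth_init work above condenses to the sketch below: two initiator veth pairs on the host, two target veth pairs inside a network namespace, all joined by one bridge. Interface names, addresses, ports and rules are copied from the trace; the real helper in test/nvmf/common.sh additionally cleans up stale devices first (the "Cannot find device" lines above), wraps iptables in an ipts helper that tags rules with an SPDK_NVMF comment, and handles phy/virt variants, all of which are omitted here, so treat this as an illustration rather than the canonical helper.

# Sketch of the test network built above (condensed from the trace).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry IP addresses, the *_br ends get enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace the nvmf_tgt app runs in.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the four *_br ends together.
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Allow NVMe/TCP traffic (port 4420) in and let the bridge forward it.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity sanity check, as in the trace: host reaches the target addresses and vice versa.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1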
00:13:44.430 [2024-11-20 16:01:42.652196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.687 [2024-11-20 16:01:42.801801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.687 [2024-11-20 16:01:42.864782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.687 [2024-11-20 16:01:42.864859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.687 [2024-11-20 16:01:42.864873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.687 [2024-11-20 16:01:42.864884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.687 [2024-11-20 16:01:42.864892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:44.687 [2024-11-20 16:01:42.865314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:13:45.620 16:01:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:45.879 true 00:13:45.879 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:45.879 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:13:46.136 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:13:46.136 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:13:46.136 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:46.395 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:46.395 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:13:46.653 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:13:46.653 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:13:46.653 16:01:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:47.218 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:13:47.218 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:13:47.476 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:13:47.476 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:13:47.476 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:47.476 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:13:47.734 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:13:47.734 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:13:47.734 16:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:47.992 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:13:47.992 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:48.558 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:13:48.558 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:13:48.558 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:48.863 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:48.863 16:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.yTNBZUXiSh 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.fawmtTv444 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.yTNBZUXiSh 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.fawmtTv444 00:13:49.143 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:49.711 16:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:49.969 [2024-11-20 16:01:47.996576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:49.969 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.yTNBZUXiSh 00:13:49.969 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yTNBZUXiSh 00:13:49.969 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:50.227 [2024-11-20 16:01:48.299037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.227 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:50.484 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:50.742 [2024-11-20 16:01:48.847171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:50.742 [2024-11-20 16:01:48.847435] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:50.742 16:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:50.999 malloc0 00:13:50.999 16:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:51.262 16:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yTNBZUXiSh 00:13:51.518 16:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:51.775 16:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.yTNBZUXiSh 00:14:03.969 Initializing NVMe Controllers 00:14:03.969 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:03.969 Initialization complete. Launching workers. 00:14:03.969 ======================================================== 00:14:03.969 Latency(us) 00:14:03.969 Device Information : IOPS MiB/s Average min max 00:14:03.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8116.49 31.71 7887.50 1601.79 11649.61 00:14:03.969 ======================================================== 00:14:03.969 Total : 8116.49 31.71 7887.50 1601.79 11649.61 00:14:03.969 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTNBZUXiSh 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yTNBZUXiSh 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71834 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71834 /var/tmp/bdevperf.sock 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71834 ']' 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
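The TLS plumbing traced above splits into key generation and target configuration. target/tls.sh@119-120 build two PSKs in the NVMe/TCP interchange format (NVMeTLSkey-1:<hash>:<Base64 of the configured PSK followed by a CRC-32>:, with hash indicator 01 selecting SHA-256 for retained-PSK derivation) via an inline python helper; the resulting strings are visible in the trace and reused verbatim below. Everything else is the RPC sequence from setup_nvmf_tgt as traced; only the shell variable names are illustrative.

# Sketch of the target-side TLS setup traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

key_path=$(mktemp)     # /tmp/tmp.yTNBZUXiSh in this run
key_2_path=$(mktemp)   # /tmp/tmp.fawmtTv444 in this run
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > "$key_2_path"
chmod 0600 "$key_path" "$key_2_path"

# nvmf_tgt was launched with --wait-for-rpc, so the ssl socket implementation can be
# selected and pinned to TLS 1.3 (kTLS left off in this run) before framework init.
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# TCP transport, one subsystem, one TLS-enabled listener (-k) on the target address.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k

# Back the subsystem with a malloc bdev and register host1's PSK through the keyring.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Note that the target never sees a raw key path on the data path: both nvmf_subsystem_add_host here and the initiator-side attach later refer to the PSK by its keyring name (key0), which is why each side first registers the file with keyring_file_add_key.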
00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:03.969 [2024-11-20 16:02:00.229782] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:03.969 [2024-11-20 16:02:00.229916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71834 ] 00:14:03.969 [2024-11-20 16:02:00.384022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.969 [2024-11-20 16:02:00.452910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.969 [2024-11-20 16:02:00.510593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTNBZUXiSh 00:14:03.969 16:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:03.969 [2024-11-20 16:02:01.134164] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:03.969 TLSTESTn1 00:14:03.969 16:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:03.969 Running I/O for 10 seconds... 
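On the initiator side, the positive test above drives a separate bdevperf instance over its own RPC socket. A condensed sketch of that sequence, with paths, NQNs and workload parameters taken from the trace (the backgrounding and the comment about waiting for the socket stand in for the suite's waitforlisten helper):

spdk=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) on core 2 with its own RPC socket; 128-deep 4 KiB verify workload.
"$spdk"/build/examples/bdevperf -m 0x4 -z -r "$rpc_sock" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# (the suite waits here until the process listens on $rpc_sock)

# Load the same PSK into bdevperf's keyring and attach to the TLS listener with it.
"$spdk"/scripts/rpc.py -s "$rpc_sock" keyring_file_add_key key0 /tmp/tmp.yTNBZUXiSh
"$spdk"/scripts/rpc.py -s "$rpc_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0

# Run the verify workload against the attached namespace (it shows up as TLSTESTn1).
"$spdk"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$rpc_sock" perform_tests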
00:14:05.161 3328.00 IOPS, 13.00 MiB/s [2024-11-20T16:02:04.784Z] 3328.00 IOPS, 13.00 MiB/s [2024-11-20T16:02:05.717Z] 3370.67 IOPS, 13.17 MiB/s [2024-11-20T16:02:06.650Z] 3392.00 IOPS, 13.25 MiB/s [2024-11-20T16:02:07.584Z] 3398.40 IOPS, 13.28 MiB/s [2024-11-20T16:02:08.516Z] 3398.83 IOPS, 13.28 MiB/s [2024-11-20T16:02:09.448Z] 3401.14 IOPS, 13.29 MiB/s [2024-11-20T16:02:10.390Z] 3408.00 IOPS, 13.31 MiB/s [2024-11-20T16:02:11.789Z] 3413.33 IOPS, 13.33 MiB/s [2024-11-20T16:02:11.789Z] 3413.50 IOPS, 13.33 MiB/s 00:14:13.539 Latency(us) 00:14:13.539 [2024-11-20T16:02:11.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.539 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.539 Verification LBA range: start 0x0 length 0x2000 00:14:13.539 TLSTESTn1 : 10.03 3415.29 13.34 0.00 0.00 37389.13 8340.95 23116.33 00:14:13.539 [2024-11-20T16:02:11.789Z] =================================================================================================================== 00:14:13.539 [2024-11-20T16:02:11.789Z] Total : 3415.29 13.34 0.00 0.00 37389.13 8340.95 23116.33 00:14:13.539 { 00:14:13.539 "results": [ 00:14:13.539 { 00:14:13.539 "job": "TLSTESTn1", 00:14:13.539 "core_mask": "0x4", 00:14:13.539 "workload": "verify", 00:14:13.539 "status": "finished", 00:14:13.539 "verify_range": { 00:14:13.539 "start": 0, 00:14:13.539 "length": 8192 00:14:13.539 }, 00:14:13.539 "queue_depth": 128, 00:14:13.539 "io_size": 4096, 00:14:13.539 "runtime": 10.031949, 00:14:13.539 "iops": 3415.288494788002, 00:14:13.539 "mibps": 13.340970682765633, 00:14:13.539 "io_failed": 0, 00:14:13.539 "io_timeout": 0, 00:14:13.539 "avg_latency_us": 37389.13293136844, 00:14:13.539 "min_latency_us": 8340.945454545454, 00:14:13.539 "max_latency_us": 23116.334545454545 00:14:13.539 } 00:14:13.539 ], 00:14:13.539 "core_count": 1 00:14:13.539 } 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71834 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71834 ']' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71834 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71834 00:14:13.539 killing process with pid 71834 00:14:13.539 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.539 00:14:13.539 Latency(us) 00:14:13.539 [2024-11-20T16:02:11.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.539 [2024-11-20T16:02:11.789Z] =================================================================================================================== 00:14:13.539 [2024-11-20T16:02:11.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 71834' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71834 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71834 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fawmtTv444 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fawmtTv444 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:13.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fawmtTv444 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fawmtTv444 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71962 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71962 /var/tmp/bdevperf.sock 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71962 ']' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.539 16:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.539 [2024-11-20 16:02:11.718883] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
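The run that starts here (target/tls.sh@147) repeats the same attach sequence but loads the second key file, /tmp/tmp.fawmtTv444, while the target still only knows host1's first key, so the TLS handshake is expected to fail; the trace below ends in spdk_sock_recv() errno 107 and a JSON-RPC -5 (Input/output error). A hedged sketch of how such an expected-failure check can be written against a fresh bdevperf instance:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Register the *wrong* PSK (key_2 from the trace) under the same keyring name.
$rpc keyring_file_add_key key0 /tmp/tmp.fawmtTv444

# The attach must fail: the target cannot validate this PSK for host1/cnode1.
if $rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0; then
    echo "unexpected: attach succeeded with a mismatched PSK" >&2
    exit 1
fi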
00:14:13.539 [2024-11-20 16:02:11.720241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 00:14:13.797 [2024-11-20 16:02:11.878341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.797 [2024-11-20 16:02:11.950465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.797 [2024-11-20 16:02:12.007992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.054 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:14.055 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:14.055 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fawmtTv444 00:14:14.312 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:14.569 [2024-11-20 16:02:12.582589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:14.569 [2024-11-20 16:02:12.589788] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:14.569 [2024-11-20 16:02:12.590247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffcfb0 (107): Transport endpoint is not connected 00:14:14.569 [2024-11-20 16:02:12.591221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffcfb0 (9): Bad file descriptor 00:14:14.569 [2024-11-20 16:02:12.592214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:14.569 [2024-11-20 16:02:12.592404] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:14.569 [2024-11-20 16:02:12.592429] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:14.569 [2024-11-20 16:02:12.592457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:14.569 request: 00:14:14.569 { 00:14:14.569 "name": "TLSTEST", 00:14:14.569 "trtype": "tcp", 00:14:14.569 "traddr": "10.0.0.3", 00:14:14.569 "adrfam": "ipv4", 00:14:14.569 "trsvcid": "4420", 00:14:14.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:14.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:14.569 "prchk_reftag": false, 00:14:14.569 "prchk_guard": false, 00:14:14.569 "hdgst": false, 00:14:14.569 "ddgst": false, 00:14:14.569 "psk": "key0", 00:14:14.569 "allow_unrecognized_csi": false, 00:14:14.569 "method": "bdev_nvme_attach_controller", 00:14:14.569 "req_id": 1 00:14:14.569 } 00:14:14.569 Got JSON-RPC error response 00:14:14.569 response: 00:14:14.569 { 00:14:14.569 "code": -5, 00:14:14.569 "message": "Input/output error" 00:14:14.569 } 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71962 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71962 ']' 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71962 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71962 00:14:14.569 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:14.570 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:14.570 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71962' 00:14:14.570 killing process with pid 71962 00:14:14.570 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71962 00:14:14.570 Received shutdown signal, test time was about 10.000000 seconds 00:14:14.570 00:14:14.570 Latency(us) 00:14:14.570 [2024-11-20T16:02:12.820Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.570 [2024-11-20T16:02:12.820Z] =================================================================================================================== 00:14:14.570 [2024-11-20T16:02:12.820Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:14.570 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71962 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yTNBZUXiSh 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yTNBZUXiSh 
00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:14.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.yTNBZUXiSh 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yTNBZUXiSh 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71983 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71983 /var/tmp/bdevperf.sock 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 71983 ']' 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.828 16:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.828 [2024-11-20 16:02:12.910594] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
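This next run (target/tls.sh@150) uses the correct key file but presents itself as nqn.2016-06.io.spdk:host2, for which no PSK was ever registered on cnode1. The target-side errors below show the lookup key it uses, the PSK identity string made of NVMe0R01 followed by the host NQN and the subsystem NQN. For contrast, a sketch of the registration that would have let host2 complete the handshake, mirroring the add_host call made for host1 earlier (the key name key1 is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Without this, the target logs:
#   Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
$rpc keyring_file_add_key key1 /tmp/tmp.yTNBZUXiSh        # or a host2-specific key file
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
     nqn.2016-06.io.spdk:host2 --psk key1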
00:14:14.828 [2024-11-20 16:02:12.910927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71983 ] 00:14:14.828 [2024-11-20 16:02:13.054171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.085 [2024-11-20 16:02:13.132799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:15.085 [2024-11-20 16:02:13.189624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:16.017 16:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.018 16:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:16.018 16:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTNBZUXiSh 00:14:16.018 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:16.275 [2024-11-20 16:02:14.506473] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:16.275 [2024-11-20 16:02:14.516861] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:16.275 [2024-11-20 16:02:14.517233] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:16.275 [2024-11-20 16:02:14.517445] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spd[2024-11-20 16:02:14.517544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88fb0 (107): Transport endpoint is not connected 00:14:16.275 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:16.275 [2024-11-20 16:02:14.518530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd88fb0 (9): Bad file descriptor 00:14:16.275 [2024-11-20 16:02:14.519529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:16.275 [2024-11-20 16:02:14.519555] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:16.275 [2024-11-20 16:02:14.519568] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:16.275 [2024-11-20 16:02:14.519585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:16.275 request: 00:14:16.275 { 00:14:16.275 "name": "TLSTEST", 00:14:16.275 "trtype": "tcp", 00:14:16.275 "traddr": "10.0.0.3", 00:14:16.275 "adrfam": "ipv4", 00:14:16.275 "trsvcid": "4420", 00:14:16.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:16.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:16.275 "prchk_reftag": false, 00:14:16.275 "prchk_guard": false, 00:14:16.275 "hdgst": false, 00:14:16.275 "ddgst": false, 00:14:16.275 "psk": "key0", 00:14:16.275 "allow_unrecognized_csi": false, 00:14:16.275 "method": "bdev_nvme_attach_controller", 00:14:16.275 "req_id": 1 00:14:16.275 } 00:14:16.275 Got JSON-RPC error response 00:14:16.275 response: 00:14:16.275 { 00:14:16.275 "code": -5, 00:14:16.275 "message": "Input/output error" 00:14:16.275 } 00:14:16.533 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71983 00:14:16.533 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71983 ']' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71983 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71983 00:14:16.534 killing process with pid 71983 00:14:16.534 Received shutdown signal, test time was about 10.000000 seconds 00:14:16.534 00:14:16.534 Latency(us) 00:14:16.534 [2024-11-20T16:02:14.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.534 [2024-11-20T16:02:14.784Z] =================================================================================================================== 00:14:16.534 [2024-11-20T16:02:14.784Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71983' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71983 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71983 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTNBZUXiSh 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTNBZUXiSh 
00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:16.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTNBZUXiSh 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yTNBZUXiSh 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72017 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72017 /var/tmp/bdevperf.sock 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72017 ']' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:16.534 16:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.791 [2024-11-20 16:02:14.824650] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
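The pattern repeats below (target/tls.sh@153) for a subsystem that was never created on the target, nqn.2016-06.io.spdk:cnode2: the PSK identity lookup fails the same way and the attach again surfaces as a JSON-RPC -5. Throughout these cases the suite wraps run_bdevperf in its NOT helper; its effect is roughly the following simplified stand-in, not the real common/autotest_common.sh implementation:

# Simplified stand-in for the suite's NOT wrapper: succeed only if the wrapped
# command fails, so "NOT run_bdevperf ..." asserts that the attach is rejected.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit statuses above 128 (signals) are still treated as hard failures.
    if (( es > 128 )); then return "$es"; fi
    (( es != 0 ))   # status 0 only when the wrapped command failed
}

# Usage as in the trace (run_bdevperf is the suite's helper):
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTNBZUXiSh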
00:14:16.791 [2024-11-20 16:02:14.825009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72017 ] 00:14:16.791 [2024-11-20 16:02:14.970204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.791 [2024-11-20 16:02:15.036190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.049 [2024-11-20 16:02:15.091549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:17.049 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.049 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:17.049 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTNBZUXiSh 00:14:17.307 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:17.566 [2024-11-20 16:02:15.668207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:17.566 [2024-11-20 16:02:15.677067] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:17.566 [2024-11-20 16:02:15.677314] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:17.566 [2024-11-20 16:02:15.677385] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:17.566 [2024-11-20 16:02:15.678235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2123fb0 (107): Transport endpoint is not connected 00:14:17.566 [2024-11-20 16:02:15.679224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2123fb0 (9): Bad file descriptor 00:14:17.566 [2024-11-20 16:02:15.680219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:17.566 [2024-11-20 16:02:15.680246] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:17.566 [2024-11-20 16:02:15.680258] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:17.566 [2024-11-20 16:02:15.680276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:17.566 request: 00:14:17.566 { 00:14:17.566 "name": "TLSTEST", 00:14:17.566 "trtype": "tcp", 00:14:17.566 "traddr": "10.0.0.3", 00:14:17.566 "adrfam": "ipv4", 00:14:17.566 "trsvcid": "4420", 00:14:17.566 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:17.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:17.566 "prchk_reftag": false, 00:14:17.566 "prchk_guard": false, 00:14:17.566 "hdgst": false, 00:14:17.566 "ddgst": false, 00:14:17.566 "psk": "key0", 00:14:17.566 "allow_unrecognized_csi": false, 00:14:17.566 "method": "bdev_nvme_attach_controller", 00:14:17.566 "req_id": 1 00:14:17.566 } 00:14:17.566 Got JSON-RPC error response 00:14:17.566 response: 00:14:17.566 { 00:14:17.566 "code": -5, 00:14:17.566 "message": "Input/output error" 00:14:17.566 } 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72017 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72017 ']' 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72017 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72017 00:14:17.566 killing process with pid 72017 00:14:17.566 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.566 00:14:17.566 Latency(us) 00:14:17.566 [2024-11-20T16:02:15.816Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.566 [2024-11-20T16:02:15.816Z] =================================================================================================================== 00:14:17.566 [2024-11-20T16:02:15.816Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72017' 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72017 00:14:17.566 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72017 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.823 16:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:17.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.823 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72038 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72038 /var/tmp/bdevperf.sock 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72038 ']' 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.824 16:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:17.824 [2024-11-20 16:02:15.989066] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:14:17.824 [2024-11-20 16:02:15.989380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72038 ] 00:14:18.082 [2024-11-20 16:02:16.135460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.082 [2024-11-20 16:02:16.201785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.082 [2024-11-20 16:02:16.256932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.015 16:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:19.015 16:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:19.015 16:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:19.273 [2024-11-20 16:02:17.277050] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:19.273 [2024-11-20 16:02:17.277347] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:19.273 request: 00:14:19.273 { 00:14:19.273 "name": "key0", 00:14:19.273 "path": "", 00:14:19.273 "method": "keyring_file_add_key", 00:14:19.273 "req_id": 1 00:14:19.273 } 00:14:19.273 Got JSON-RPC error response 00:14:19.273 response: 00:14:19.273 { 00:14:19.273 "code": -1, 00:14:19.273 "message": "Operation not permitted" 00:14:19.273 } 00:14:19.273 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:19.532 [2024-11-20 16:02:17.593251] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.532 [2024-11-20 16:02:17.593565] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:19.532 request: 00:14:19.532 { 00:14:19.532 "name": "TLSTEST", 00:14:19.532 "trtype": "tcp", 00:14:19.532 "traddr": "10.0.0.3", 00:14:19.532 "adrfam": "ipv4", 00:14:19.532 "trsvcid": "4420", 00:14:19.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:19.532 "prchk_reftag": false, 00:14:19.532 "prchk_guard": false, 00:14:19.532 "hdgst": false, 00:14:19.532 "ddgst": false, 00:14:19.532 "psk": "key0", 00:14:19.532 "allow_unrecognized_csi": false, 00:14:19.532 "method": "bdev_nvme_attach_controller", 00:14:19.532 "req_id": 1 00:14:19.532 } 00:14:19.532 Got JSON-RPC error response 00:14:19.532 response: 00:14:19.532 { 00:14:19.532 "code": -126, 00:14:19.533 "message": "Required key not available" 00:14:19.533 } 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72038 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72038 ']' 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72038 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.533 16:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72038 00:14:19.533 killing process with pid 72038 00:14:19.533 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.533 00:14:19.533 Latency(us) 00:14:19.533 [2024-11-20T16:02:17.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.533 [2024-11-20T16:02:17.783Z] =================================================================================================================== 00:14:19.533 [2024-11-20T16:02:17.783Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72038' 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72038 00:14:19.533 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72038 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71590 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 71590 ']' 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 71590 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71590 00:14:19.791 killing process with pid 71590 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71590' 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 71590 00:14:19.791 16:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 71590 00:14:20.048 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:20.048 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:20.048 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:20.048 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.DlhSamf6Nn 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.DlhSamf6Nn 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72082 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72082 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72082 ']' 00:14:20.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.049 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.049 [2024-11-20 16:02:18.201552] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:20.049 [2024-11-20 16:02:18.201790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.306 [2024-11-20 16:02:18.341518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.307 [2024-11-20 16:02:18.404187] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.307 [2024-11-20 16:02:18.404485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
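The key_long value above is the PSK in TLS interchange form: the "NVMeTLSkey-1" prefix, a two-digit designator ("02", from digest=2 above), and a base64 payload, written out to /tmp/tmp.DlhSamf6Nn. A minimal Python sketch that reproduces the string printed above, assuming the payload is the configured secret followed by its 4-byte little-endian CRC32 (consistent with the format_key helper the script pipes into `python -` above):

    import base64, zlib

    # Sketch of the interchange encoding performed by format_interchange_psk above.
    # Assumption: payload = secret bytes + 4-byte little-endian CRC32 of the secret.
    secret = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(secret).to_bytes(4, byteorder="little")
    b64 = base64.b64encode(secret + crc).decode()
    print(f"NVMeTLSkey-1:02:{b64}:")
    # -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
    # (matches the key_long value logged above)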
00:14:20.307 [2024-11-20 16:02:18.404724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.307 [2024-11-20 16:02:18.404739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.307 [2024-11-20 16:02:18.404747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.307 [2024-11-20 16:02:18.405213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.307 [2024-11-20 16:02:18.460362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:20.307 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.307 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:20.307 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:20.307 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:20.307 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.564 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.564 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:20.564 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DlhSamf6Nn 00:14:20.564 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:20.822 [2024-11-20 16:02:18.860316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.822 16:02:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:21.079 16:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:21.338 [2024-11-20 16:02:19.368441] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.338 [2024-11-20 16:02:19.368982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:21.338 16:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:21.595 malloc0 00:14:21.595 16:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.852 16:02:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:22.122 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlhSamf6Nn 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DlhSamf6Nn 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72136 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72136 /var/tmp/bdevperf.sock 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72136 ']' 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:22.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.418 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.418 [2024-11-20 16:02:20.492919] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:14:22.418 [2024-11-20 16:02:20.493278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72136 ] 00:14:22.418 [2024-11-20 16:02:20.640097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.676 [2024-11-20 16:02:20.712787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.676 [2024-11-20 16:02:20.770460] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.676 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.676 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:22.676 16:02:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:22.935 16:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:23.193 [2024-11-20 16:02:21.419369] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:23.451 TLSTESTn1 00:14:23.451 16:02:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:23.451 Running I/O for 10 seconds... 00:14:25.759 3200.00 IOPS, 12.50 MiB/s [2024-11-20T16:02:24.943Z] 3216.00 IOPS, 12.56 MiB/s [2024-11-20T16:02:25.932Z] 3236.00 IOPS, 12.64 MiB/s [2024-11-20T16:02:26.867Z] 3237.00 IOPS, 12.64 MiB/s [2024-11-20T16:02:27.801Z] 3273.20 IOPS, 12.79 MiB/s [2024-11-20T16:02:28.735Z] 3234.33 IOPS, 12.63 MiB/s [2024-11-20T16:02:29.670Z] 3200.00 IOPS, 12.50 MiB/s [2024-11-20T16:02:31.042Z] 3207.38 IOPS, 12.53 MiB/s [2024-11-20T16:02:31.974Z] 3217.22 IOPS, 12.57 MiB/s [2024-11-20T16:02:31.974Z] 3212.80 IOPS, 12.55 MiB/s 00:14:33.724 Latency(us) 00:14:33.724 [2024-11-20T16:02:31.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.724 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:33.724 Verification LBA range: start 0x0 length 0x2000 00:14:33.725 TLSTESTn1 : 10.03 3216.55 12.56 0.00 0.00 39715.64 11081.54 34555.35 00:14:33.725 [2024-11-20T16:02:31.975Z] =================================================================================================================== 00:14:33.725 [2024-11-20T16:02:31.975Z] Total : 3216.55 12.56 0.00 0.00 39715.64 11081.54 34555.35 00:14:33.725 { 00:14:33.725 "results": [ 00:14:33.725 { 00:14:33.725 "job": "TLSTESTn1", 00:14:33.725 "core_mask": "0x4", 00:14:33.725 "workload": "verify", 00:14:33.725 "status": "finished", 00:14:33.725 "verify_range": { 00:14:33.725 "start": 0, 00:14:33.725 "length": 8192 00:14:33.725 }, 00:14:33.725 "queue_depth": 128, 00:14:33.725 "io_size": 4096, 00:14:33.725 "runtime": 10.028147, 00:14:33.725 "iops": 3216.546386884835, 00:14:33.725 "mibps": 12.564634323768887, 00:14:33.725 "io_failed": 0, 00:14:33.725 "io_timeout": 0, 00:14:33.725 "avg_latency_us": 39715.63867243867, 00:14:33.725 "min_latency_us": 11081.541818181819, 00:14:33.725 
"max_latency_us": 34555.34545454545 00:14:33.725 } 00:14:33.725 ], 00:14:33.725 "core_count": 1 00:14:33.725 } 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 72136 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72136 ']' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72136 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72136 00:14:33.725 killing process with pid 72136 00:14:33.725 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.725 00:14:33.725 Latency(us) 00:14:33.725 [2024-11-20T16:02:31.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.725 [2024-11-20T16:02:31.975Z] =================================================================================================================== 00:14:33.725 [2024-11-20T16:02:31.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72136' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72136 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72136 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.DlhSamf6Nn 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlhSamf6Nn 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlhSamf6Nn 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DlhSamf6Nn 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DlhSamf6Nn 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=72265 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 72265 /var/tmp/bdevperf.sock 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72265 ']' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.725 16:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.983 [2024-11-20 16:02:31.976642] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:14:33.983 [2024-11-20 16:02:31.976787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72265 ] 00:14:33.983 [2024-11-20 16:02:32.124615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.983 [2024-11-20 16:02:32.187998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:34.242 [2024-11-20 16:02:32.241709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:34.242 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.242 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:34.242 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:34.553 [2024-11-20 16:02:32.568736] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DlhSamf6Nn': 0100666 00:14:34.553 [2024-11-20 16:02:32.568797] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:34.553 request: 00:14:34.553 { 00:14:34.553 "name": "key0", 00:14:34.553 "path": "/tmp/tmp.DlhSamf6Nn", 00:14:34.553 "method": "keyring_file_add_key", 00:14:34.553 "req_id": 1 00:14:34.553 } 00:14:34.553 Got JSON-RPC error response 00:14:34.553 response: 00:14:34.553 { 00:14:34.553 "code": -1, 00:14:34.553 "message": "Operation not permitted" 00:14:34.553 } 00:14:34.553 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:34.826 [2024-11-20 16:02:32.832927] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:34.826 [2024-11-20 16:02:32.833269] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:34.826 request: 00:14:34.826 { 00:14:34.826 "name": "TLSTEST", 00:14:34.826 "trtype": "tcp", 00:14:34.826 "traddr": "10.0.0.3", 00:14:34.826 "adrfam": "ipv4", 00:14:34.826 "trsvcid": "4420", 00:14:34.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.826 "prchk_reftag": false, 00:14:34.826 "prchk_guard": false, 00:14:34.826 "hdgst": false, 00:14:34.826 "ddgst": false, 00:14:34.826 "psk": "key0", 00:14:34.826 "allow_unrecognized_csi": false, 00:14:34.826 "method": "bdev_nvme_attach_controller", 00:14:34.826 "req_id": 1 00:14:34.826 } 00:14:34.826 Got JSON-RPC error response 00:14:34.826 response: 00:14:34.826 { 00:14:34.826 "code": -126, 00:14:34.826 "message": "Required key not available" 00:14:34.826 } 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 72265 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72265 ']' 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72265 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72265 00:14:34.826 killing process with pid 72265 00:14:34.826 Received shutdown signal, test time was about 10.000000 seconds 00:14:34.826 00:14:34.826 Latency(us) 00:14:34.826 [2024-11-20T16:02:33.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.826 [2024-11-20T16:02:33.076Z] =================================================================================================================== 00:14:34.826 [2024-11-20T16:02:33.076Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72265' 00:14:34.826 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72265 00:14:34.827 16:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72265 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 72082 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72082 ']' 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72082 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72082 00:14:35.084 killing process with pid 72082 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72082' 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72082 00:14:35.084 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72082 00:14:35.085 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:14:35.085 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.085 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.085 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:14:35.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72291 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72291 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72291 ']' 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.343 16:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.343 [2024-11-20 16:02:33.384703] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:35.343 [2024-11-20 16:02:33.385125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.343 [2024-11-20 16:02:33.532176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.602 [2024-11-20 16:02:33.595368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.602 [2024-11-20 16:02:33.595731] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.602 [2024-11-20 16:02:33.595923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.602 [2024-11-20 16:02:33.596112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.602 [2024-11-20 16:02:33.596158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:35.602 [2024-11-20 16:02:33.596722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.602 [2024-11-20 16:02:33.651525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:36.169 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.169 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:36.169 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.169 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.169 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DlhSamf6Nn 00:14:36.429 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:36.687 [2024-11-20 16:02:34.700647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.687 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.945 16:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:37.203 [2024-11-20 16:02:35.232775] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.203 [2024-11-20 16:02:35.233339] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:37.203 16:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:37.461 malloc0 00:14:37.461 16:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.719 16:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:37.977 
[2024-11-20 16:02:36.084217] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DlhSamf6Nn': 0100666 00:14:37.977 [2024-11-20 16:02:36.084492] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:37.977 request: 00:14:37.977 { 00:14:37.977 "name": "key0", 00:14:37.977 "path": "/tmp/tmp.DlhSamf6Nn", 00:14:37.977 "method": "keyring_file_add_key", 00:14:37.977 "req_id": 1 00:14:37.977 } 00:14:37.977 Got JSON-RPC error response 00:14:37.977 response: 00:14:37.977 { 00:14:37.977 "code": -1, 00:14:37.977 "message": "Operation not permitted" 00:14:37.977 } 00:14:37.977 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.235 [2024-11-20 16:02:36.348322] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:14:38.235 [2024-11-20 16:02:36.348429] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:38.235 request: 00:14:38.235 { 00:14:38.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:38.236 "host": "nqn.2016-06.io.spdk:host1", 00:14:38.236 "psk": "key0", 00:14:38.236 "method": "nvmf_subsystem_add_host", 00:14:38.236 "req_id": 1 00:14:38.236 } 00:14:38.236 Got JSON-RPC error response 00:14:38.236 response: 00:14:38.236 { 00:14:38.236 "code": -32603, 00:14:38.236 "message": "Internal error" 00:14:38.236 } 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 72291 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72291 ']' 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72291 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72291 00:14:38.236 killing process with pid 72291 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72291' 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72291 00:14:38.236 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72291 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.DlhSamf6Nn 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72366 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72366 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72366 ']' 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.493 16:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.493 [2024-11-20 16:02:36.679990] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:38.493 [2024-11-20 16:02:36.680108] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.752 [2024-11-20 16:02:36.830890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.752 [2024-11-20 16:02:36.900739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.752 [2024-11-20 16:02:36.900838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.752 [2024-11-20 16:02:36.900863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.752 [2024-11-20 16:02:36.900879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.752 [2024-11-20 16:02:36.900893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:38.752 [2024-11-20 16:02:36.901433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.752 [2024-11-20 16:02:36.959116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DlhSamf6Nn 00:14:39.010 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:39.269 [2024-11-20 16:02:37.306356] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.269 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:39.527 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:39.784 [2024-11-20 16:02:37.918492] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:39.784 [2024-11-20 16:02:37.918848] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:39.784 16:02:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:40.041 malloc0 00:14:40.041 16:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:40.298 16:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:40.864 16:02:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:40.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
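The keyring_file_add_key call in the setup above goes through this time (no keyring error follows, and the subsequent nvmf_subsystem_add_host and bdevperf attach succeed) because the key file was chmod'ed back to 0600 at target/tls.sh@182; the two earlier failures ("Invalid permissions for key file '/tmp/tmp.DlhSamf6Nn': 0100666") were triggered by the deliberate chmod 0666. A minimal sketch of that kind of permission gate, assuming the keyring module simply rejects key files whose group/other permission bits are set (the exact check in keyring.c is not shown in this log):

    import os, stat

    def key_file_permissions_ok(path: str) -> bool:
        # Assumption for illustration: refuse any key file readable or writable by
        # group/other, mirroring why 0666 is rejected and 0600 is accepted above.
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    # key_file_permissions_ok("/tmp/tmp.DlhSamf6Nn")
    # -> False while the file is 0666, True after chmod 0600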
00:14:40.864 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=72420 00:14:40.864 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:40.864 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:40.864 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 72420 /var/tmp/bdevperf.sock 00:14:40.864 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72420 ']' 00:14:40.865 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:40.865 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.865 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:40.865 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.865 16:02:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.121 [2024-11-20 16:02:39.127168] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:41.121 [2024-11-20 16:02:39.127271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72420 ] 00:14:41.121 [2024-11-20 16:02:39.268652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.121 [2024-11-20 16:02:39.334032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.378 [2024-11-20 16:02:39.388379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.309 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.309 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:42.309 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:42.309 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:42.567 [2024-11-20 16:02:40.729057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.567 TLSTESTn1 00:14:42.825 16:02:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:43.083 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:14:43.083 "subsystems": [ 00:14:43.083 { 00:14:43.083 "subsystem": "keyring", 00:14:43.083 "config": [ 00:14:43.083 { 00:14:43.083 "method": "keyring_file_add_key", 00:14:43.083 "params": { 00:14:43.083 "name": "key0", 00:14:43.083 "path": "/tmp/tmp.DlhSamf6Nn" 00:14:43.083 } 00:14:43.083 } 00:14:43.083 ] 00:14:43.083 }, 
00:14:43.083 { 00:14:43.083 "subsystem": "iobuf", 00:14:43.083 "config": [ 00:14:43.083 { 00:14:43.083 "method": "iobuf_set_options", 00:14:43.083 "params": { 00:14:43.083 "small_pool_count": 8192, 00:14:43.083 "large_pool_count": 1024, 00:14:43.083 "small_bufsize": 8192, 00:14:43.083 "large_bufsize": 135168, 00:14:43.083 "enable_numa": false 00:14:43.083 } 00:14:43.083 } 00:14:43.083 ] 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "subsystem": "sock", 00:14:43.083 "config": [ 00:14:43.083 { 00:14:43.083 "method": "sock_set_default_impl", 00:14:43.083 "params": { 00:14:43.083 "impl_name": "uring" 00:14:43.083 } 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "method": "sock_impl_set_options", 00:14:43.083 "params": { 00:14:43.083 "impl_name": "ssl", 00:14:43.083 "recv_buf_size": 4096, 00:14:43.083 "send_buf_size": 4096, 00:14:43.083 "enable_recv_pipe": true, 00:14:43.083 "enable_quickack": false, 00:14:43.083 "enable_placement_id": 0, 00:14:43.083 "enable_zerocopy_send_server": true, 00:14:43.083 "enable_zerocopy_send_client": false, 00:14:43.083 "zerocopy_threshold": 0, 00:14:43.083 "tls_version": 0, 00:14:43.083 "enable_ktls": false 00:14:43.083 } 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "method": "sock_impl_set_options", 00:14:43.083 "params": { 00:14:43.083 "impl_name": "posix", 00:14:43.083 "recv_buf_size": 2097152, 00:14:43.083 "send_buf_size": 2097152, 00:14:43.083 "enable_recv_pipe": true, 00:14:43.083 "enable_quickack": false, 00:14:43.083 "enable_placement_id": 0, 00:14:43.083 "enable_zerocopy_send_server": true, 00:14:43.083 "enable_zerocopy_send_client": false, 00:14:43.083 "zerocopy_threshold": 0, 00:14:43.083 "tls_version": 0, 00:14:43.083 "enable_ktls": false 00:14:43.083 } 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "method": "sock_impl_set_options", 00:14:43.083 "params": { 00:14:43.083 "impl_name": "uring", 00:14:43.083 "recv_buf_size": 2097152, 00:14:43.083 "send_buf_size": 2097152, 00:14:43.083 "enable_recv_pipe": true, 00:14:43.083 "enable_quickack": false, 00:14:43.083 "enable_placement_id": 0, 00:14:43.083 "enable_zerocopy_send_server": false, 00:14:43.083 "enable_zerocopy_send_client": false, 00:14:43.083 "zerocopy_threshold": 0, 00:14:43.083 "tls_version": 0, 00:14:43.083 "enable_ktls": false 00:14:43.083 } 00:14:43.083 } 00:14:43.083 ] 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "subsystem": "vmd", 00:14:43.083 "config": [] 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "subsystem": "accel", 00:14:43.083 "config": [ 00:14:43.083 { 00:14:43.083 "method": "accel_set_options", 00:14:43.083 "params": { 00:14:43.083 "small_cache_size": 128, 00:14:43.083 "large_cache_size": 16, 00:14:43.083 "task_count": 2048, 00:14:43.083 "sequence_count": 2048, 00:14:43.083 "buf_count": 2048 00:14:43.083 } 00:14:43.083 } 00:14:43.083 ] 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "subsystem": "bdev", 00:14:43.083 "config": [ 00:14:43.083 { 00:14:43.083 "method": "bdev_set_options", 00:14:43.083 "params": { 00:14:43.083 "bdev_io_pool_size": 65535, 00:14:43.083 "bdev_io_cache_size": 256, 00:14:43.083 "bdev_auto_examine": true, 00:14:43.083 "iobuf_small_cache_size": 128, 00:14:43.083 "iobuf_large_cache_size": 16 00:14:43.083 } 00:14:43.083 }, 00:14:43.083 { 00:14:43.083 "method": "bdev_raid_set_options", 00:14:43.083 "params": { 00:14:43.083 "process_window_size_kb": 1024, 00:14:43.084 "process_max_bandwidth_mb_sec": 0 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "bdev_iscsi_set_options", 00:14:43.084 "params": { 00:14:43.084 "timeout_sec": 30 00:14:43.084 } 00:14:43.084 
}, 00:14:43.084 { 00:14:43.084 "method": "bdev_nvme_set_options", 00:14:43.084 "params": { 00:14:43.084 "action_on_timeout": "none", 00:14:43.084 "timeout_us": 0, 00:14:43.084 "timeout_admin_us": 0, 00:14:43.084 "keep_alive_timeout_ms": 10000, 00:14:43.084 "arbitration_burst": 0, 00:14:43.084 "low_priority_weight": 0, 00:14:43.084 "medium_priority_weight": 0, 00:14:43.084 "high_priority_weight": 0, 00:14:43.084 "nvme_adminq_poll_period_us": 10000, 00:14:43.084 "nvme_ioq_poll_period_us": 0, 00:14:43.084 "io_queue_requests": 0, 00:14:43.084 "delay_cmd_submit": true, 00:14:43.084 "transport_retry_count": 4, 00:14:43.084 "bdev_retry_count": 3, 00:14:43.084 "transport_ack_timeout": 0, 00:14:43.084 "ctrlr_loss_timeout_sec": 0, 00:14:43.084 "reconnect_delay_sec": 0, 00:14:43.084 "fast_io_fail_timeout_sec": 0, 00:14:43.084 "disable_auto_failback": false, 00:14:43.084 "generate_uuids": false, 00:14:43.084 "transport_tos": 0, 00:14:43.084 "nvme_error_stat": false, 00:14:43.084 "rdma_srq_size": 0, 00:14:43.084 "io_path_stat": false, 00:14:43.084 "allow_accel_sequence": false, 00:14:43.084 "rdma_max_cq_size": 0, 00:14:43.084 "rdma_cm_event_timeout_ms": 0, 00:14:43.084 "dhchap_digests": [ 00:14:43.084 "sha256", 00:14:43.084 "sha384", 00:14:43.084 "sha512" 00:14:43.084 ], 00:14:43.084 "dhchap_dhgroups": [ 00:14:43.084 "null", 00:14:43.084 "ffdhe2048", 00:14:43.084 "ffdhe3072", 00:14:43.084 "ffdhe4096", 00:14:43.084 "ffdhe6144", 00:14:43.084 "ffdhe8192" 00:14:43.084 ] 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "bdev_nvme_set_hotplug", 00:14:43.084 "params": { 00:14:43.084 "period_us": 100000, 00:14:43.084 "enable": false 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "bdev_malloc_create", 00:14:43.084 "params": { 00:14:43.084 "name": "malloc0", 00:14:43.084 "num_blocks": 8192, 00:14:43.084 "block_size": 4096, 00:14:43.084 "physical_block_size": 4096, 00:14:43.084 "uuid": "1e40c7fe-dee7-4df2-9f67-29d8761af212", 00:14:43.084 "optimal_io_boundary": 0, 00:14:43.084 "md_size": 0, 00:14:43.084 "dif_type": 0, 00:14:43.084 "dif_is_head_of_md": false, 00:14:43.084 "dif_pi_format": 0 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "bdev_wait_for_examine" 00:14:43.084 } 00:14:43.084 ] 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "subsystem": "nbd", 00:14:43.084 "config": [] 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "subsystem": "scheduler", 00:14:43.084 "config": [ 00:14:43.084 { 00:14:43.084 "method": "framework_set_scheduler", 00:14:43.084 "params": { 00:14:43.084 "name": "static" 00:14:43.084 } 00:14:43.084 } 00:14:43.084 ] 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "subsystem": "nvmf", 00:14:43.084 "config": [ 00:14:43.084 { 00:14:43.084 "method": "nvmf_set_config", 00:14:43.084 "params": { 00:14:43.084 "discovery_filter": "match_any", 00:14:43.084 "admin_cmd_passthru": { 00:14:43.084 "identify_ctrlr": false 00:14:43.084 }, 00:14:43.084 "dhchap_digests": [ 00:14:43.084 "sha256", 00:14:43.084 "sha384", 00:14:43.084 "sha512" 00:14:43.084 ], 00:14:43.084 "dhchap_dhgroups": [ 00:14:43.084 "null", 00:14:43.084 "ffdhe2048", 00:14:43.084 "ffdhe3072", 00:14:43.084 "ffdhe4096", 00:14:43.084 "ffdhe6144", 00:14:43.084 "ffdhe8192" 00:14:43.084 ] 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_set_max_subsystems", 00:14:43.084 "params": { 00:14:43.084 "max_subsystems": 1024 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_set_crdt", 00:14:43.084 "params": { 00:14:43.084 "crdt1": 0, 00:14:43.084 
"crdt2": 0, 00:14:43.084 "crdt3": 0 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_create_transport", 00:14:43.084 "params": { 00:14:43.084 "trtype": "TCP", 00:14:43.084 "max_queue_depth": 128, 00:14:43.084 "max_io_qpairs_per_ctrlr": 127, 00:14:43.084 "in_capsule_data_size": 4096, 00:14:43.084 "max_io_size": 131072, 00:14:43.084 "io_unit_size": 131072, 00:14:43.084 "max_aq_depth": 128, 00:14:43.084 "num_shared_buffers": 511, 00:14:43.084 "buf_cache_size": 4294967295, 00:14:43.084 "dif_insert_or_strip": false, 00:14:43.084 "zcopy": false, 00:14:43.084 "c2h_success": false, 00:14:43.084 "sock_priority": 0, 00:14:43.084 "abort_timeout_sec": 1, 00:14:43.084 "ack_timeout": 0, 00:14:43.084 "data_wr_pool_size": 0 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_create_subsystem", 00:14:43.084 "params": { 00:14:43.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.084 "allow_any_host": false, 00:14:43.084 "serial_number": "SPDK00000000000001", 00:14:43.084 "model_number": "SPDK bdev Controller", 00:14:43.084 "max_namespaces": 10, 00:14:43.084 "min_cntlid": 1, 00:14:43.084 "max_cntlid": 65519, 00:14:43.084 "ana_reporting": false 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_subsystem_add_host", 00:14:43.084 "params": { 00:14:43.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.084 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.084 "psk": "key0" 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_subsystem_add_ns", 00:14:43.084 "params": { 00:14:43.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.084 "namespace": { 00:14:43.084 "nsid": 1, 00:14:43.084 "bdev_name": "malloc0", 00:14:43.084 "nguid": "1E40C7FEDEE74DF29F6729D8761AF212", 00:14:43.084 "uuid": "1e40c7fe-dee7-4df2-9f67-29d8761af212", 00:14:43.084 "no_auto_visible": false 00:14:43.084 } 00:14:43.084 } 00:14:43.084 }, 00:14:43.084 { 00:14:43.084 "method": "nvmf_subsystem_add_listener", 00:14:43.084 "params": { 00:14:43.084 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.084 "listen_address": { 00:14:43.084 "trtype": "TCP", 00:14:43.084 "adrfam": "IPv4", 00:14:43.084 "traddr": "10.0.0.3", 00:14:43.084 "trsvcid": "4420" 00:14:43.084 }, 00:14:43.084 "secure_channel": true 00:14:43.084 } 00:14:43.084 } 00:14:43.084 ] 00:14:43.084 } 00:14:43.084 ] 00:14:43.084 }' 00:14:43.084 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:43.343 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:14:43.343 "subsystems": [ 00:14:43.343 { 00:14:43.343 "subsystem": "keyring", 00:14:43.343 "config": [ 00:14:43.343 { 00:14:43.343 "method": "keyring_file_add_key", 00:14:43.343 "params": { 00:14:43.343 "name": "key0", 00:14:43.343 "path": "/tmp/tmp.DlhSamf6Nn" 00:14:43.343 } 00:14:43.343 } 00:14:43.343 ] 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "subsystem": "iobuf", 00:14:43.343 "config": [ 00:14:43.343 { 00:14:43.343 "method": "iobuf_set_options", 00:14:43.343 "params": { 00:14:43.343 "small_pool_count": 8192, 00:14:43.343 "large_pool_count": 1024, 00:14:43.343 "small_bufsize": 8192, 00:14:43.343 "large_bufsize": 135168, 00:14:43.343 "enable_numa": false 00:14:43.343 } 00:14:43.343 } 00:14:43.343 ] 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "subsystem": "sock", 00:14:43.343 "config": [ 00:14:43.343 { 00:14:43.343 "method": "sock_set_default_impl", 00:14:43.343 "params": { 00:14:43.343 "impl_name": "uring" 00:14:43.343 
} 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "sock_impl_set_options", 00:14:43.343 "params": { 00:14:43.343 "impl_name": "ssl", 00:14:43.343 "recv_buf_size": 4096, 00:14:43.343 "send_buf_size": 4096, 00:14:43.343 "enable_recv_pipe": true, 00:14:43.343 "enable_quickack": false, 00:14:43.343 "enable_placement_id": 0, 00:14:43.343 "enable_zerocopy_send_server": true, 00:14:43.343 "enable_zerocopy_send_client": false, 00:14:43.343 "zerocopy_threshold": 0, 00:14:43.343 "tls_version": 0, 00:14:43.343 "enable_ktls": false 00:14:43.343 } 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "sock_impl_set_options", 00:14:43.343 "params": { 00:14:43.343 "impl_name": "posix", 00:14:43.343 "recv_buf_size": 2097152, 00:14:43.343 "send_buf_size": 2097152, 00:14:43.343 "enable_recv_pipe": true, 00:14:43.343 "enable_quickack": false, 00:14:43.343 "enable_placement_id": 0, 00:14:43.343 "enable_zerocopy_send_server": true, 00:14:43.343 "enable_zerocopy_send_client": false, 00:14:43.343 "zerocopy_threshold": 0, 00:14:43.343 "tls_version": 0, 00:14:43.343 "enable_ktls": false 00:14:43.343 } 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "sock_impl_set_options", 00:14:43.343 "params": { 00:14:43.343 "impl_name": "uring", 00:14:43.343 "recv_buf_size": 2097152, 00:14:43.343 "send_buf_size": 2097152, 00:14:43.343 "enable_recv_pipe": true, 00:14:43.343 "enable_quickack": false, 00:14:43.343 "enable_placement_id": 0, 00:14:43.343 "enable_zerocopy_send_server": false, 00:14:43.343 "enable_zerocopy_send_client": false, 00:14:43.343 "zerocopy_threshold": 0, 00:14:43.343 "tls_version": 0, 00:14:43.343 "enable_ktls": false 00:14:43.343 } 00:14:43.343 } 00:14:43.343 ] 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "subsystem": "vmd", 00:14:43.343 "config": [] 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "subsystem": "accel", 00:14:43.343 "config": [ 00:14:43.343 { 00:14:43.343 "method": "accel_set_options", 00:14:43.343 "params": { 00:14:43.343 "small_cache_size": 128, 00:14:43.343 "large_cache_size": 16, 00:14:43.343 "task_count": 2048, 00:14:43.343 "sequence_count": 2048, 00:14:43.343 "buf_count": 2048 00:14:43.343 } 00:14:43.343 } 00:14:43.343 ] 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "subsystem": "bdev", 00:14:43.343 "config": [ 00:14:43.343 { 00:14:43.343 "method": "bdev_set_options", 00:14:43.343 "params": { 00:14:43.343 "bdev_io_pool_size": 65535, 00:14:43.343 "bdev_io_cache_size": 256, 00:14:43.343 "bdev_auto_examine": true, 00:14:43.343 "iobuf_small_cache_size": 128, 00:14:43.343 "iobuf_large_cache_size": 16 00:14:43.343 } 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "bdev_raid_set_options", 00:14:43.343 "params": { 00:14:43.343 "process_window_size_kb": 1024, 00:14:43.343 "process_max_bandwidth_mb_sec": 0 00:14:43.343 } 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "bdev_iscsi_set_options", 00:14:43.343 "params": { 00:14:43.343 "timeout_sec": 30 00:14:43.343 } 00:14:43.343 }, 00:14:43.343 { 00:14:43.343 "method": "bdev_nvme_set_options", 00:14:43.343 "params": { 00:14:43.343 "action_on_timeout": "none", 00:14:43.343 "timeout_us": 0, 00:14:43.343 "timeout_admin_us": 0, 00:14:43.343 "keep_alive_timeout_ms": 10000, 00:14:43.343 "arbitration_burst": 0, 00:14:43.343 "low_priority_weight": 0, 00:14:43.343 "medium_priority_weight": 0, 00:14:43.343 "high_priority_weight": 0, 00:14:43.344 "nvme_adminq_poll_period_us": 10000, 00:14:43.344 "nvme_ioq_poll_period_us": 0, 00:14:43.344 "io_queue_requests": 512, 00:14:43.344 "delay_cmd_submit": true, 00:14:43.344 "transport_retry_count": 4, 
00:14:43.344 "bdev_retry_count": 3, 00:14:43.344 "transport_ack_timeout": 0, 00:14:43.344 "ctrlr_loss_timeout_sec": 0, 00:14:43.344 "reconnect_delay_sec": 0, 00:14:43.344 "fast_io_fail_timeout_sec": 0, 00:14:43.344 "disable_auto_failback": false, 00:14:43.344 "generate_uuids": false, 00:14:43.344 "transport_tos": 0, 00:14:43.344 "nvme_error_stat": false, 00:14:43.344 "rdma_srq_size": 0, 00:14:43.344 "io_path_stat": false, 00:14:43.344 "allow_accel_sequence": false, 00:14:43.344 "rdma_max_cq_size": 0, 00:14:43.344 "rdma_cm_event_timeout_ms": 0, 00:14:43.344 "dhchap_digests": [ 00:14:43.344 "sha256", 00:14:43.344 "sha384", 00:14:43.344 "sha512" 00:14:43.344 ], 00:14:43.344 "dhchap_dhgroups": [ 00:14:43.344 "null", 00:14:43.344 "ffdhe2048", 00:14:43.344 "ffdhe3072", 00:14:43.344 "ffdhe4096", 00:14:43.344 "ffdhe6144", 00:14:43.344 "ffdhe8192" 00:14:43.344 ] 00:14:43.344 } 00:14:43.344 }, 00:14:43.344 { 00:14:43.344 "method": "bdev_nvme_attach_controller", 00:14:43.344 "params": { 00:14:43.344 "name": "TLSTEST", 00:14:43.344 "trtype": "TCP", 00:14:43.344 "adrfam": "IPv4", 00:14:43.344 "traddr": "10.0.0.3", 00:14:43.344 "trsvcid": "4420", 00:14:43.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.344 "prchk_reftag": false, 00:14:43.344 "prchk_guard": false, 00:14:43.344 "ctrlr_loss_timeout_sec": 0, 00:14:43.344 "reconnect_delay_sec": 0, 00:14:43.344 "fast_io_fail_timeout_sec": 0, 00:14:43.344 "psk": "key0", 00:14:43.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.344 "hdgst": false, 00:14:43.344 "ddgst": false, 00:14:43.344 "multipath": "multipath" 00:14:43.344 } 00:14:43.344 }, 00:14:43.344 { 00:14:43.344 "method": "bdev_nvme_set_hotplug", 00:14:43.344 "params": { 00:14:43.344 "period_us": 100000, 00:14:43.344 "enable": false 00:14:43.344 } 00:14:43.344 }, 00:14:43.344 { 00:14:43.344 "method": "bdev_wait_for_examine" 00:14:43.344 } 00:14:43.344 ] 00:14:43.344 }, 00:14:43.344 { 00:14:43.344 "subsystem": "nbd", 00:14:43.344 "config": [] 00:14:43.344 } 00:14:43.344 ] 00:14:43.344 }' 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 72420 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72420 ']' 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72420 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.344 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72420 00:14:43.602 killing process with pid 72420 00:14:43.602 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.602 00:14:43.602 Latency(us) 00:14:43.602 [2024-11-20T16:02:41.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.602 [2024-11-20T16:02:41.852Z] =================================================================================================================== 00:14:43.602 [2024-11-20T16:02:41.852Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 72420' 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72420 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72420 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 72366 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72366 ']' 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72366 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.602 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72366 00:14:43.861 killing process with pid 72366 00:14:43.861 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:43.861 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:43.861 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72366' 00:14:43.861 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72366 00:14:43.861 16:02:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72366 00:14:43.861 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:43.861 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.861 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.861 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.861 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:14:43.861 "subsystems": [ 00:14:43.861 { 00:14:43.861 "subsystem": "keyring", 00:14:43.861 "config": [ 00:14:43.861 { 00:14:43.861 "method": "keyring_file_add_key", 00:14:43.861 "params": { 00:14:43.861 "name": "key0", 00:14:43.861 "path": "/tmp/tmp.DlhSamf6Nn" 00:14:43.861 } 00:14:43.861 } 00:14:43.861 ] 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "subsystem": "iobuf", 00:14:43.861 "config": [ 00:14:43.861 { 00:14:43.861 "method": "iobuf_set_options", 00:14:43.861 "params": { 00:14:43.861 "small_pool_count": 8192, 00:14:43.861 "large_pool_count": 1024, 00:14:43.861 "small_bufsize": 8192, 00:14:43.861 "large_bufsize": 135168, 00:14:43.861 "enable_numa": false 00:14:43.861 } 00:14:43.861 } 00:14:43.861 ] 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "subsystem": "sock", 00:14:43.861 "config": [ 00:14:43.861 { 00:14:43.861 "method": "sock_set_default_impl", 00:14:43.861 "params": { 00:14:43.861 "impl_name": "uring" 00:14:43.861 } 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "method": "sock_impl_set_options", 00:14:43.861 "params": { 00:14:43.861 "impl_name": "ssl", 00:14:43.861 "recv_buf_size": 4096, 00:14:43.861 "send_buf_size": 4096, 00:14:43.861 "enable_recv_pipe": true, 00:14:43.861 "enable_quickack": false, 00:14:43.861 "enable_placement_id": 0, 00:14:43.861 "enable_zerocopy_send_server": true, 00:14:43.861 "enable_zerocopy_send_client": false, 00:14:43.861 "zerocopy_threshold": 0, 00:14:43.861 "tls_version": 0, 00:14:43.861 
"enable_ktls": false 00:14:43.861 } 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "method": "sock_impl_set_options", 00:14:43.861 "params": { 00:14:43.861 "impl_name": "posix", 00:14:43.861 "recv_buf_size": 2097152, 00:14:43.861 "send_buf_size": 2097152, 00:14:43.861 "enable_recv_pipe": true, 00:14:43.861 "enable_quickack": false, 00:14:43.861 "enable_placement_id": 0, 00:14:43.861 "enable_zerocopy_send_server": true, 00:14:43.861 "enable_zerocopy_send_client": false, 00:14:43.861 "zerocopy_threshold": 0, 00:14:43.861 "tls_version": 0, 00:14:43.861 "enable_ktls": false 00:14:43.861 } 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "method": "sock_impl_set_options", 00:14:43.861 "params": { 00:14:43.861 "impl_name": "uring", 00:14:43.861 "recv_buf_size": 2097152, 00:14:43.861 "send_buf_size": 2097152, 00:14:43.861 "enable_recv_pipe": true, 00:14:43.861 "enable_quickack": false, 00:14:43.861 "enable_placement_id": 0, 00:14:43.861 "enable_zerocopy_send_server": false, 00:14:43.861 "enable_zerocopy_send_client": false, 00:14:43.861 "zerocopy_threshold": 0, 00:14:43.861 "tls_version": 0, 00:14:43.861 "enable_ktls": false 00:14:43.861 } 00:14:43.861 } 00:14:43.861 ] 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "subsystem": "vmd", 00:14:43.861 "config": [] 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "subsystem": "accel", 00:14:43.861 "config": [ 00:14:43.861 { 00:14:43.861 "method": "accel_set_options", 00:14:43.861 "params": { 00:14:43.861 "small_cache_size": 128, 00:14:43.861 "large_cache_size": 16, 00:14:43.861 "task_count": 2048, 00:14:43.861 "sequence_count": 2048, 00:14:43.861 "buf_count": 2048 00:14:43.861 } 00:14:43.861 } 00:14:43.861 ] 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "subsystem": "bdev", 00:14:43.861 "config": [ 00:14:43.861 { 00:14:43.861 "method": "bdev_set_options", 00:14:43.861 "params": { 00:14:43.861 "bdev_io_pool_size": 65535, 00:14:43.861 "bdev_io_cache_size": 256, 00:14:43.861 "bdev_auto_examine": true, 00:14:43.861 "iobuf_small_cache_size": 128, 00:14:43.861 "iobuf_large_cache_size": 16 00:14:43.861 } 00:14:43.861 }, 00:14:43.861 { 00:14:43.861 "method": "bdev_raid_set_options", 00:14:43.861 "params": { 00:14:43.861 "process_window_size_kb": 1024, 00:14:43.861 "process_max_bandwidth_mb_sec": 0 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "bdev_iscsi_set_options", 00:14:43.862 "params": { 00:14:43.862 "timeout_sec": 30 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "bdev_nvme_set_options", 00:14:43.862 "params": { 00:14:43.862 "action_on_timeout": "none", 00:14:43.862 "timeout_us": 0, 00:14:43.862 "timeout_admin_us": 0, 00:14:43.862 "keep_alive_timeout_ms": 10000, 00:14:43.862 "arbitration_burst": 0, 00:14:43.862 "low_priority_weight": 0, 00:14:43.862 "medium_priority_weight": 0, 00:14:43.862 "high_priority_weight": 0, 00:14:43.862 "nvme_adminq_poll_period_us": 10000, 00:14:43.862 "nvme_ioq_poll_period_us": 0, 00:14:43.862 "io_queue_requests": 0, 00:14:43.862 "delay_cmd_submit": true, 00:14:43.862 "transport_retry_count": 4, 00:14:43.862 "bdev_retry_count": 3, 00:14:43.862 "transport_ack_timeout": 0, 00:14:43.862 "ctrlr_loss_timeout_sec": 0, 00:14:43.862 "reconnect_delay_sec": 0, 00:14:43.862 "fast_io_fail_timeout_sec": 0, 00:14:43.862 "disable_auto_failback": false, 00:14:43.862 "generate_uuids": false, 00:14:43.862 "transport_tos": 0, 00:14:43.862 "nvme_error_stat": false, 00:14:43.862 "rdma_srq_size": 0, 00:14:43.862 "io_path_stat": false, 00:14:43.862 "allow_accel_sequence": false, 00:14:43.862 "rdma_max_cq_size": 0, 
00:14:43.862 "rdma_cm_event_timeout_ms": 0, 00:14:43.862 "dhchap_digests": [ 00:14:43.862 "sha256", 00:14:43.862 "sha384", 00:14:43.862 "sha512" 00:14:43.862 ], 00:14:43.862 "dhchap_dhgroups": [ 00:14:43.862 "null", 00:14:43.862 "ffdhe2048", 00:14:43.862 "ffdhe3072", 00:14:43.862 "ffdhe4096", 00:14:43.862 "ffdhe6144", 00:14:43.862 "ffdhe8192" 00:14:43.862 ] 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "bdev_nvme_set_hotplug", 00:14:43.862 "params": { 00:14:43.862 "period_us": 100000, 00:14:43.862 "enable": false 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "bdev_malloc_create", 00:14:43.862 "params": { 00:14:43.862 "name": "malloc0", 00:14:43.862 "num_blocks": 8192, 00:14:43.862 "block_size": 4096, 00:14:43.862 "physical_block_size": 4096, 00:14:43.862 "uuid": "1e40c7fe-dee7-4df2-9f67-29d8761af212", 00:14:43.862 "optimal_io_boundary": 0, 00:14:43.862 "md_size": 0, 00:14:43.862 "dif_type": 0, 00:14:43.862 "dif_is_head_of_md": false, 00:14:43.862 "dif_pi_format": 0 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "bdev_wait_for_examine" 00:14:43.862 } 00:14:43.862 ] 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "subsystem": "nbd", 00:14:43.862 "config": [] 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "subsystem": "scheduler", 00:14:43.862 "config": [ 00:14:43.862 { 00:14:43.862 "method": "framework_set_scheduler", 00:14:43.862 "params": { 00:14:43.862 "name": "static" 00:14:43.862 } 00:14:43.862 } 00:14:43.862 ] 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "subsystem": "nvmf", 00:14:43.862 "config": [ 00:14:43.862 { 00:14:43.862 "method": "nvmf_set_config", 00:14:43.862 "params": { 00:14:43.862 "discovery_filter": "match_any", 00:14:43.862 "admin_cmd_passthru": { 00:14:43.862 "identify_ctrlr": false 00:14:43.862 }, 00:14:43.862 "dhchap_digests": [ 00:14:43.862 "sha256", 00:14:43.862 "sha384", 00:14:43.862 "sha512" 00:14:43.862 ], 00:14:43.862 "dhchap_dhgroups": [ 00:14:43.862 "null", 00:14:43.862 "ffdhe2048", 00:14:43.862 "ffdhe3072", 00:14:43.862 "ffdhe4096", 00:14:43.862 "ffdhe6144", 00:14:43.862 "ffdhe8192" 00:14:43.862 ] 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_set_max_subsystems", 00:14:43.862 "params": { 00:14:43.862 "max_subsystems": 1024 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_set_crdt", 00:14:43.862 "params": { 00:14:43.862 "crdt1": 0, 00:14:43.862 "crdt2": 0, 00:14:43.862 "crdt3": 0 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_create_transport", 00:14:43.862 "params": { 00:14:43.862 "trtype": "TCP", 00:14:43.862 "max_queue_depth": 128, 00:14:43.862 "max_io_qpairs_per_ctrlr": 127, 00:14:43.862 "in_capsule_data_size": 4096, 00:14:43.862 "max_io_size": 131072, 00:14:43.862 "io_unit_size": 131072, 00:14:43.862 "max_aq_depth": 128, 00:14:43.862 "num_shared_buffers": 511, 00:14:43.862 "buf_cache_size": 4294967295, 00:14:43.862 "dif_insert_or_strip": false, 00:14:43.862 "zcopy": false, 00:14:43.862 "c2h_success": false, 00:14:43.862 "sock_priority": 0, 00:14:43.862 "abort_timeout_sec": 1, 00:14:43.862 "ack_timeout": 0, 00:14:43.862 "data_wr_pool_size": 0 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_create_subsystem", 00:14:43.862 "params": { 00:14:43.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.862 "allow_any_host": false, 00:14:43.862 "serial_number": "SPDK00000000000001", 00:14:43.862 "model_number": "SPDK bdev Controller", 00:14:43.862 "max_namespaces": 10, 00:14:43.862 "min_cntlid": 1, 
00:14:43.862 "max_cntlid": 65519, 00:14:43.862 "ana_reporting": false 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_subsystem_add_host", 00:14:43.862 "params": { 00:14:43.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.862 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.862 "psk": "key0" 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_subsystem_add_ns", 00:14:43.862 "params": { 00:14:43.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.862 "namespace": { 00:14:43.862 "nsid": 1, 00:14:43.862 "bdev_name": "malloc0", 00:14:43.862 "nguid": "1E40C7FEDEE74DF29F6729D8761AF212", 00:14:43.862 "uuid": "1e40c7fe-dee7-4df2-9f67-29d8761af212", 00:14:43.862 "no_auto_visible": false 00:14:43.862 } 00:14:43.862 } 00:14:43.862 }, 00:14:43.862 { 00:14:43.862 "method": "nvmf_subsystem_add_listener", 00:14:43.862 "params": { 00:14:43.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.862 "listen_address": { 00:14:43.862 "trtype": "TCP", 00:14:43.862 "adrfam": "IPv4", 00:14:43.862 "traddr": "10.0.0.3", 00:14:43.862 "trsvcid": "4420" 00:14:43.862 }, 00:14:43.862 "secure_channel": true 00:14:43.862 } 00:14:43.862 } 00:14:43.862 ] 00:14:43.862 } 00:14:43.862 ] 00:14:43.862 }' 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72475 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72475 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72475 ']' 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.862 16:02:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.121 [2024-11-20 16:02:42.124063] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:44.121 [2024-11-20 16:02:42.124179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.121 [2024-11-20 16:02:42.272448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.121 [2024-11-20 16:02:42.337403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.121 [2024-11-20 16:02:42.337468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.121 [2024-11-20 16:02:42.337480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.121 [2024-11-20 16:02:42.337489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:44.121 [2024-11-20 16:02:42.337496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.121 [2024-11-20 16:02:42.337978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.378 [2024-11-20 16:02:42.506192] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.378 [2024-11-20 16:02:42.590120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.378 [2024-11-20 16:02:42.622059] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:44.378 [2024-11-20 16:02:42.622324] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=72507 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 72507 /var/tmp/bdevperf.sock 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72507 ']' 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
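The bdevperf configuration echoed below is the initiator half of the TLS test: a keyring entry that loads the pre-shared key file (key0 -> /tmp/tmp.DlhSamf6Nn) and a bdev_nvme_attach_controller call that references it with "psk": "key0". As a rough sketch, the same state can be reached with runtime RPCs against a bdevperf started with -z -r /var/tmp/bdevperf.sock, using only values taken from this run (the controller name TLSTEST comes from the JSON below; a later pass of this test uses -b nvme0 instead):

    # initiator side: register the PSK, then attach to the target over NVMe/TCP with TLS
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # shorthand for the path used throughout this log
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0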
00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.308 16:02:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:14:45.308 "subsystems": [ 00:14:45.308 { 00:14:45.308 "subsystem": "keyring", 00:14:45.308 "config": [ 00:14:45.308 { 00:14:45.308 "method": "keyring_file_add_key", 00:14:45.308 "params": { 00:14:45.308 "name": "key0", 00:14:45.308 "path": "/tmp/tmp.DlhSamf6Nn" 00:14:45.308 } 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "subsystem": "iobuf", 00:14:45.308 "config": [ 00:14:45.308 { 00:14:45.308 "method": "iobuf_set_options", 00:14:45.308 "params": { 00:14:45.308 "small_pool_count": 8192, 00:14:45.308 "large_pool_count": 1024, 00:14:45.308 "small_bufsize": 8192, 00:14:45.308 "large_bufsize": 135168, 00:14:45.308 "enable_numa": false 00:14:45.308 } 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "subsystem": "sock", 00:14:45.308 "config": [ 00:14:45.308 { 00:14:45.308 "method": "sock_set_default_impl", 00:14:45.308 "params": { 00:14:45.308 "impl_name": "uring" 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "sock_impl_set_options", 00:14:45.308 "params": { 00:14:45.308 "impl_name": "ssl", 00:14:45.308 "recv_buf_size": 4096, 00:14:45.308 "send_buf_size": 4096, 00:14:45.308 "enable_recv_pipe": true, 00:14:45.308 "enable_quickack": false, 00:14:45.308 "enable_placement_id": 0, 00:14:45.308 "enable_zerocopy_send_server": true, 00:14:45.308 "enable_zerocopy_send_client": false, 00:14:45.308 "zerocopy_threshold": 0, 00:14:45.308 "tls_version": 0, 00:14:45.308 "enable_ktls": false 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "sock_impl_set_options", 00:14:45.308 "params": { 00:14:45.308 "impl_name": "posix", 00:14:45.308 "recv_buf_size": 2097152, 00:14:45.308 "send_buf_size": 2097152, 00:14:45.308 "enable_recv_pipe": true, 00:14:45.308 "enable_quickack": false, 00:14:45.308 "enable_placement_id": 0, 00:14:45.308 "enable_zerocopy_send_server": true, 00:14:45.308 "enable_zerocopy_send_client": false, 00:14:45.308 "zerocopy_threshold": 0, 00:14:45.308 "tls_version": 0, 00:14:45.308 "enable_ktls": false 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "sock_impl_set_options", 00:14:45.308 "params": { 00:14:45.308 "impl_name": "uring", 00:14:45.308 "recv_buf_size": 2097152, 00:14:45.308 "send_buf_size": 2097152, 00:14:45.308 "enable_recv_pipe": true, 00:14:45.308 "enable_quickack": false, 00:14:45.308 "enable_placement_id": 0, 00:14:45.308 "enable_zerocopy_send_server": false, 00:14:45.308 "enable_zerocopy_send_client": false, 00:14:45.308 "zerocopy_threshold": 0, 00:14:45.308 "tls_version": 0, 00:14:45.308 "enable_ktls": false 00:14:45.308 } 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "subsystem": "vmd", 00:14:45.308 "config": [] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "subsystem": "accel", 00:14:45.308 "config": [ 00:14:45.308 { 00:14:45.308 "method": "accel_set_options", 00:14:45.308 "params": { 00:14:45.308 "small_cache_size": 128, 00:14:45.308 "large_cache_size": 16, 00:14:45.308 "task_count": 2048, 00:14:45.308 "sequence_count": 2048, 00:14:45.308 "buf_count": 2048 00:14:45.308 } 00:14:45.308 } 00:14:45.308 ] 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "subsystem": "bdev", 00:14:45.308 "config": [ 00:14:45.308 { 00:14:45.308 "method": 
"bdev_set_options", 00:14:45.308 "params": { 00:14:45.308 "bdev_io_pool_size": 65535, 00:14:45.308 "bdev_io_cache_size": 256, 00:14:45.308 "bdev_auto_examine": true, 00:14:45.308 "iobuf_small_cache_size": 128, 00:14:45.308 "iobuf_large_cache_size": 16 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "bdev_raid_set_options", 00:14:45.308 "params": { 00:14:45.308 "process_window_size_kb": 1024, 00:14:45.308 "process_max_bandwidth_mb_sec": 0 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "bdev_iscsi_set_options", 00:14:45.308 "params": { 00:14:45.308 "timeout_sec": 30 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "bdev_nvme_set_options", 00:14:45.308 "params": { 00:14:45.308 "action_on_timeout": "none", 00:14:45.308 "timeout_us": 0, 00:14:45.308 "timeout_admin_us": 0, 00:14:45.308 "keep_alive_timeout_ms": 10000, 00:14:45.308 "arbitration_burst": 0, 00:14:45.308 "low_priority_weight": 0, 00:14:45.308 "medium_priority_weight": 0, 00:14:45.308 "high_priority_weight": 0, 00:14:45.308 "nvme_adminq_poll_period_us": 10000, 00:14:45.308 "nvme_ioq_poll_period_us": 0, 00:14:45.308 "io_queue_requests": 512, 00:14:45.308 "delay_cmd_submit": true, 00:14:45.308 "transport_retry_count": 4, 00:14:45.308 "bdev_retry_count": 3, 00:14:45.308 "transport_ack_timeout": 0, 00:14:45.308 "ctrlr_loss_timeout_sec": 0, 00:14:45.308 "reconnect_delay_sec": 0, 00:14:45.308 "fast_io_fail_timeout_sec": 0, 00:14:45.308 "disable_auto_failback": false, 00:14:45.308 "generate_uuids": false, 00:14:45.308 "transport_tos": 0, 00:14:45.308 "nvme_error_stat": false, 00:14:45.308 "rdma_srq_size": 0, 00:14:45.308 "io_path_stat": false, 00:14:45.308 "allow_accel_sequence": false, 00:14:45.308 "rdma_max_cq_size": 0, 00:14:45.308 "rdma_cm_event_timeout_ms": 0, 00:14:45.308 "dhchap_digests": [ 00:14:45.308 "sha256", 00:14:45.308 "sha384", 00:14:45.308 "sha512" 00:14:45.308 ], 00:14:45.308 "dhchap_dhgroups": [ 00:14:45.308 "null", 00:14:45.308 "ffdhe2048", 00:14:45.308 "ffdhe3072", 00:14:45.308 "ffdhe4096", 00:14:45.308 "ffdhe6144", 00:14:45.308 "ffdhe8192" 00:14:45.308 ] 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "bdev_nvme_attach_controller", 00:14:45.308 "params": { 00:14:45.308 "name": "TLSTEST", 00:14:45.308 "trtype": "TCP", 00:14:45.308 "adrfam": "IPv4", 00:14:45.308 "traddr": "10.0.0.3", 00:14:45.308 "trsvcid": "4420", 00:14:45.308 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.308 "prchk_reftag": false, 00:14:45.308 "prchk_guard": false, 00:14:45.308 "ctrlr_loss_timeout_sec": 0, 00:14:45.308 "reconnect_delay_sec": 0, 00:14:45.308 "fast_io_fail_timeout_sec": 0, 00:14:45.308 "psk": "key0", 00:14:45.308 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.308 "hdgst": false, 00:14:45.308 "ddgst": false, 00:14:45.308 "multipath": "multipath" 00:14:45.308 } 00:14:45.308 }, 00:14:45.308 { 00:14:45.308 "method": "bdev_nvme_set_hotplug", 00:14:45.308 "params": { 00:14:45.309 "period_us": 100000, 00:14:45.309 "enable": false 00:14:45.309 } 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "method": "bdev_wait_for_examine" 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 }, 00:14:45.309 { 00:14:45.309 "subsystem": "nbd", 00:14:45.309 "config": [] 00:14:45.309 } 00:14:45.309 ] 00:14:45.309 }' 00:14:45.309 [2024-11-20 16:02:43.322357] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:14:45.309 [2024-11-20 16:02:43.323449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72507 ] 00:14:45.309 [2024-11-20 16:02:43.470864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.309 [2024-11-20 16:02:43.538492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.566 [2024-11-20 16:02:43.675646] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.566 [2024-11-20 16:02:43.729195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.135 16:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.135 16:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.135 16:02:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:46.413 Running I/O for 10 seconds... 00:14:48.283 3939.00 IOPS, 15.39 MiB/s [2024-11-20T16:02:47.528Z] 3968.00 IOPS, 15.50 MiB/s [2024-11-20T16:02:48.903Z] 3968.00 IOPS, 15.50 MiB/s [2024-11-20T16:02:49.836Z] 3968.00 IOPS, 15.50 MiB/s [2024-11-20T16:02:50.769Z] 3985.80 IOPS, 15.57 MiB/s [2024-11-20T16:02:51.718Z] 4002.67 IOPS, 15.64 MiB/s [2024-11-20T16:02:52.662Z] 4012.57 IOPS, 15.67 MiB/s [2024-11-20T16:02:53.596Z] 4022.38 IOPS, 15.71 MiB/s [2024-11-20T16:02:54.530Z] 4027.00 IOPS, 15.73 MiB/s [2024-11-20T16:02:54.530Z] 4033.20 IOPS, 15.75 MiB/s 00:14:56.280 Latency(us) 00:14:56.280 [2024-11-20T16:02:54.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.280 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:56.280 Verification LBA range: start 0x0 length 0x2000 00:14:56.280 TLSTESTn1 : 10.02 4039.14 15.78 0.00 0.00 31630.51 6404.65 25976.09 00:14:56.280 [2024-11-20T16:02:54.530Z] =================================================================================================================== 00:14:56.280 [2024-11-20T16:02:54.530Z] Total : 4039.14 15.78 0.00 0.00 31630.51 6404.65 25976.09 00:14:56.280 { 00:14:56.280 "results": [ 00:14:56.280 { 00:14:56.280 "job": "TLSTESTn1", 00:14:56.280 "core_mask": "0x4", 00:14:56.280 "workload": "verify", 00:14:56.280 "status": "finished", 00:14:56.280 "verify_range": { 00:14:56.280 "start": 0, 00:14:56.280 "length": 8192 00:14:56.280 }, 00:14:56.280 "queue_depth": 128, 00:14:56.280 "io_size": 4096, 00:14:56.280 "runtime": 10.016742, 00:14:56.280 "iops": 4039.1376757033377, 00:14:56.280 "mibps": 15.777881545716163, 00:14:56.280 "io_failed": 0, 00:14:56.280 "io_timeout": 0, 00:14:56.280 "avg_latency_us": 31630.51160011594, 00:14:56.280 "min_latency_us": 6404.654545454546, 00:14:56.280 "max_latency_us": 25976.087272727273 00:14:56.280 } 00:14:56.280 ], 00:14:56.280 "core_count": 1 00:14:56.280 } 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 72507 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72507 ']' 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 72507 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72507 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:56.537 killing process with pid 72507 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72507' 00:14:56.537 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72507 00:14:56.537 Received shutdown signal, test time was about 10.000000 seconds 00:14:56.537 00:14:56.537 Latency(us) 00:14:56.537 [2024-11-20T16:02:54.787Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.537 [2024-11-20T16:02:54.787Z] =================================================================================================================== 00:14:56.538 [2024-11-20T16:02:54.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72507 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 72475 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72475 ']' 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72475 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.538 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72475 00:14:56.795 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:56.795 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:56.795 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72475' 00:14:56.795 killing process with pid 72475 00:14:56.795 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72475 00:14:56.795 16:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72475 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72646 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:56.795 16:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72646 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72646 ']' 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.795 16:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.053 [2024-11-20 16:02:55.074355] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:14:57.053 [2024-11-20 16:02:55.074469] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.053 [2024-11-20 16:02:55.218446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.053 [2024-11-20 16:02:55.280132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.053 [2024-11-20 16:02:55.280201] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.053 [2024-11-20 16:02:55.280213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.053 [2024-11-20 16:02:55.280222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.053 [2024-11-20 16:02:55.280229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.053 [2024-11-20 16:02:55.280647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.311 [2024-11-20 16:02:55.334513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.DlhSamf6Nn 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DlhSamf6Nn 00:14:57.878 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:58.191 [2024-11-20 16:02:56.323571] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.191 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.450 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:58.708 [2024-11-20 16:02:56.915687] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.708 [2024-11-20 16:02:56.915991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:58.708 16:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.967 malloc0 00:14:58.967 16:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:59.226 16:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:14:59.791 16:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72707 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72707 /var/tmp/bdevperf.sock 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72707 ']' 00:15:00.048 
16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.048 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.049 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.049 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.049 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.049 [2024-11-20 16:02:58.136068] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:00.049 [2024-11-20 16:02:58.136197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72707 ] 00:15:00.049 [2024-11-20 16:02:58.281705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.306 [2024-11-20 16:02:58.360422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.306 [2024-11-20 16:02:58.423223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.306 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.306 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:00.306 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:15:00.564 16:02:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:00.822 [2024-11-20 16:02:59.051717] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.091 nvme0n1 00:15:01.091 16:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:01.091 Running I/O for 1 seconds... 
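In the result blocks that bdevperf prints (the 10-second run above and the 1-second run that follows), "iops", "io_size" and "mibps" are directly related: throughput in MiB/s is IOPS times the 4096-byte I/O size divided by 2^20. A quick sanity check against the JSON "results" entry of the 10-second run above:

    # 4039.14 IOPS at 4 KiB per I/O works out to the reported ~15.78 MiB/s
    awk 'BEGIN { print 4039.1376757033377 * 4096 / (1024 * 1024) }'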
00:15:02.290 3883.00 IOPS, 15.17 MiB/s 00:15:02.290 Latency(us) 00:15:02.290 [2024-11-20T16:03:00.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.290 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:02.290 Verification LBA range: start 0x0 length 0x2000 00:15:02.290 nvme0n1 : 1.02 3940.41 15.39 0.00 0.00 32134.20 5332.25 24069.59 00:15:02.290 [2024-11-20T16:03:00.540Z] =================================================================================================================== 00:15:02.290 [2024-11-20T16:03:00.540Z] Total : 3940.41 15.39 0.00 0.00 32134.20 5332.25 24069.59 00:15:02.290 { 00:15:02.290 "results": [ 00:15:02.290 { 00:15:02.290 "job": "nvme0n1", 00:15:02.290 "core_mask": "0x2", 00:15:02.290 "workload": "verify", 00:15:02.290 "status": "finished", 00:15:02.290 "verify_range": { 00:15:02.290 "start": 0, 00:15:02.290 "length": 8192 00:15:02.290 }, 00:15:02.290 "queue_depth": 128, 00:15:02.290 "io_size": 4096, 00:15:02.290 "runtime": 1.018167, 00:15:02.290 "iops": 3940.4144899608805, 00:15:02.290 "mibps": 15.39224410140969, 00:15:02.290 "io_failed": 0, 00:15:02.290 "io_timeout": 0, 00:15:02.290 "avg_latency_us": 32134.198480920873, 00:15:02.290 "min_latency_us": 5332.2472727272725, 00:15:02.290 "max_latency_us": 24069.585454545453 00:15:02.290 } 00:15:02.290 ], 00:15:02.290 "core_count": 1 00:15:02.290 } 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72707 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72707 ']' 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72707 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72707 00:15:02.290 killing process with pid 72707 00:15:02.290 Received shutdown signal, test time was about 1.000000 seconds 00:15:02.290 00:15:02.290 Latency(us) 00:15:02.290 [2024-11-20T16:03:00.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.290 [2024-11-20T16:03:00.540Z] =================================================================================================================== 00:15:02.290 [2024-11-20T16:03:00.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72707' 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72707 00:15:02.290 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72707 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72646 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72646 ']' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72646 00:15:02.548 16:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72646 00:15:02.548 killing process with pid 72646 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72646' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72646 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72646 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72745 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72745 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72745 ']' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.548 16:03:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.806 [2024-11-20 16:03:00.876441] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:02.806 [2024-11-20 16:03:00.876964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.806 [2024-11-20 16:03:01.039931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.064 [2024-11-20 16:03:01.102229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.064 [2024-11-20 16:03:01.102303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:03.064 [2024-11-20 16:03:01.102316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.064 [2024-11-20 16:03:01.102325] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.064 [2024-11-20 16:03:01.102332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:03.064 [2024-11-20 16:03:01.102737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.064 [2024-11-20 16:03:01.156563] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.997 16:03:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.997 [2024-11-20 16:03:01.959642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.997 malloc0 00:15:03.997 [2024-11-20 16:03:01.990616] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.997 [2024-11-20 16:03:01.990874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72777 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72777 /var/tmp/bdevperf.sock 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72777 ']' 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.997 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
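At this point the target exposes malloc0 behind the experimental TLS listener on 10.0.0.3:4420, and bdevperf is started idle so its bdev layer can be configured over a second, separate RPC socket before any I/O runs. A sketch of that launch using the exact flags from the trace (the backgrounding is implied by the later waitforlisten on /var/tmp/bdevperf.sock):

    # -z: start with no workload and wait for RPC configuration
    # -r: out-of-band RPC socket, distinct from the target's /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    bdevperf_pid=$!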
00:15:03.998 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.998 16:03:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.998 [2024-11-20 16:03:02.079587] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:03.998 [2024-11-20 16:03:02.079898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72777 ] 00:15:03.998 [2024-11-20 16:03:02.227794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.255 [2024-11-20 16:03:02.308487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.256 [2024-11-20 16:03:02.365913] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.821 16:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.821 16:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:04.821 16:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn 00:15:05.431 16:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:05.431 [2024-11-20 16:03:03.652914] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:05.689 nvme0n1 00:15:05.689 16:03:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:05.689 Running I/O for 1 seconds... 
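The verification run itself is driven entirely over bdevperf's RPC socket: the PSK file is registered as key0, an NVMe/TCP controller is attached to the TLS listener with that key, and perform_tests starts the 1-second verify workload against the resulting nvme0n1 bdev. The three commands as they appear in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/tmp.DlhSamf6Nn
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests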
00:15:06.881 3968.00 IOPS, 15.50 MiB/s 00:15:06.881 Latency(us) 00:15:06.881 [2024-11-20T16:03:05.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.881 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.881 Verification LBA range: start 0x0 length 0x2000 00:15:06.881 nvme0n1 : 1.03 3984.87 15.57 0.00 0.00 31769.59 7268.54 19422.49 00:15:06.881 [2024-11-20T16:03:05.131Z] =================================================================================================================== 00:15:06.881 [2024-11-20T16:03:05.131Z] Total : 3984.87 15.57 0.00 0.00 31769.59 7268.54 19422.49 00:15:06.881 { 00:15:06.881 "results": [ 00:15:06.881 { 00:15:06.881 "job": "nvme0n1", 00:15:06.881 "core_mask": "0x2", 00:15:06.881 "workload": "verify", 00:15:06.881 "status": "finished", 00:15:06.881 "verify_range": { 00:15:06.881 "start": 0, 00:15:06.881 "length": 8192 00:15:06.881 }, 00:15:06.881 "queue_depth": 128, 00:15:06.881 "io_size": 4096, 00:15:06.881 "runtime": 1.027889, 00:15:06.881 "iops": 3984.866070169055, 00:15:06.881 "mibps": 15.565883086597871, 00:15:06.881 "io_failed": 0, 00:15:06.881 "io_timeout": 0, 00:15:06.881 "avg_latency_us": 31769.585454545453, 00:15:06.881 "min_latency_us": 7268.538181818182, 00:15:06.881 "max_latency_us": 19422.487272727274 00:15:06.881 } 00:15:06.881 ], 00:15:06.881 "core_count": 1 00:15:06.881 } 00:15:06.882 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:06.882 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.882 16:03:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.882 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.882 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:06.882 "subsystems": [ 00:15:06.882 { 00:15:06.882 "subsystem": "keyring", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "keyring_file_add_key", 00:15:06.882 "params": { 00:15:06.882 "name": "key0", 00:15:06.882 "path": "/tmp/tmp.DlhSamf6Nn" 00:15:06.882 } 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "iobuf", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "iobuf_set_options", 00:15:06.882 "params": { 00:15:06.882 "small_pool_count": 8192, 00:15:06.882 "large_pool_count": 1024, 00:15:06.882 "small_bufsize": 8192, 00:15:06.882 "large_bufsize": 135168, 00:15:06.882 "enable_numa": false 00:15:06.882 } 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "sock", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "sock_set_default_impl", 00:15:06.882 "params": { 00:15:06.882 "impl_name": "uring" 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "sock_impl_set_options", 00:15:06.882 "params": { 00:15:06.882 "impl_name": "ssl", 00:15:06.882 "recv_buf_size": 4096, 00:15:06.882 "send_buf_size": 4096, 00:15:06.882 "enable_recv_pipe": true, 00:15:06.882 "enable_quickack": false, 00:15:06.882 "enable_placement_id": 0, 00:15:06.882 "enable_zerocopy_send_server": true, 00:15:06.882 "enable_zerocopy_send_client": false, 00:15:06.882 "zerocopy_threshold": 0, 00:15:06.882 "tls_version": 0, 00:15:06.882 "enable_ktls": false 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "sock_impl_set_options", 00:15:06.882 "params": { 00:15:06.882 "impl_name": 
"posix", 00:15:06.882 "recv_buf_size": 2097152, 00:15:06.882 "send_buf_size": 2097152, 00:15:06.882 "enable_recv_pipe": true, 00:15:06.882 "enable_quickack": false, 00:15:06.882 "enable_placement_id": 0, 00:15:06.882 "enable_zerocopy_send_server": true, 00:15:06.882 "enable_zerocopy_send_client": false, 00:15:06.882 "zerocopy_threshold": 0, 00:15:06.882 "tls_version": 0, 00:15:06.882 "enable_ktls": false 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "sock_impl_set_options", 00:15:06.882 "params": { 00:15:06.882 "impl_name": "uring", 00:15:06.882 "recv_buf_size": 2097152, 00:15:06.882 "send_buf_size": 2097152, 00:15:06.882 "enable_recv_pipe": true, 00:15:06.882 "enable_quickack": false, 00:15:06.882 "enable_placement_id": 0, 00:15:06.882 "enable_zerocopy_send_server": false, 00:15:06.882 "enable_zerocopy_send_client": false, 00:15:06.882 "zerocopy_threshold": 0, 00:15:06.882 "tls_version": 0, 00:15:06.882 "enable_ktls": false 00:15:06.882 } 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "vmd", 00:15:06.882 "config": [] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "accel", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "accel_set_options", 00:15:06.882 "params": { 00:15:06.882 "small_cache_size": 128, 00:15:06.882 "large_cache_size": 16, 00:15:06.882 "task_count": 2048, 00:15:06.882 "sequence_count": 2048, 00:15:06.882 "buf_count": 2048 00:15:06.882 } 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "bdev", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "bdev_set_options", 00:15:06.882 "params": { 00:15:06.882 "bdev_io_pool_size": 65535, 00:15:06.882 "bdev_io_cache_size": 256, 00:15:06.882 "bdev_auto_examine": true, 00:15:06.882 "iobuf_small_cache_size": 128, 00:15:06.882 "iobuf_large_cache_size": 16 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_raid_set_options", 00:15:06.882 "params": { 00:15:06.882 "process_window_size_kb": 1024, 00:15:06.882 "process_max_bandwidth_mb_sec": 0 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_iscsi_set_options", 00:15:06.882 "params": { 00:15:06.882 "timeout_sec": 30 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_nvme_set_options", 00:15:06.882 "params": { 00:15:06.882 "action_on_timeout": "none", 00:15:06.882 "timeout_us": 0, 00:15:06.882 "timeout_admin_us": 0, 00:15:06.882 "keep_alive_timeout_ms": 10000, 00:15:06.882 "arbitration_burst": 0, 00:15:06.882 "low_priority_weight": 0, 00:15:06.882 "medium_priority_weight": 0, 00:15:06.882 "high_priority_weight": 0, 00:15:06.882 "nvme_adminq_poll_period_us": 10000, 00:15:06.882 "nvme_ioq_poll_period_us": 0, 00:15:06.882 "io_queue_requests": 0, 00:15:06.882 "delay_cmd_submit": true, 00:15:06.882 "transport_retry_count": 4, 00:15:06.882 "bdev_retry_count": 3, 00:15:06.882 "transport_ack_timeout": 0, 00:15:06.882 "ctrlr_loss_timeout_sec": 0, 00:15:06.882 "reconnect_delay_sec": 0, 00:15:06.882 "fast_io_fail_timeout_sec": 0, 00:15:06.882 "disable_auto_failback": false, 00:15:06.882 "generate_uuids": false, 00:15:06.882 "transport_tos": 0, 00:15:06.882 "nvme_error_stat": false, 00:15:06.882 "rdma_srq_size": 0, 00:15:06.882 "io_path_stat": false, 00:15:06.882 "allow_accel_sequence": false, 00:15:06.882 "rdma_max_cq_size": 0, 00:15:06.882 "rdma_cm_event_timeout_ms": 0, 00:15:06.882 "dhchap_digests": [ 00:15:06.882 "sha256", 00:15:06.882 "sha384", 00:15:06.882 "sha512" 00:15:06.882 ], 00:15:06.882 
"dhchap_dhgroups": [ 00:15:06.882 "null", 00:15:06.882 "ffdhe2048", 00:15:06.882 "ffdhe3072", 00:15:06.882 "ffdhe4096", 00:15:06.882 "ffdhe6144", 00:15:06.882 "ffdhe8192" 00:15:06.882 ] 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_nvme_set_hotplug", 00:15:06.882 "params": { 00:15:06.882 "period_us": 100000, 00:15:06.882 "enable": false 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_malloc_create", 00:15:06.882 "params": { 00:15:06.882 "name": "malloc0", 00:15:06.882 "num_blocks": 8192, 00:15:06.882 "block_size": 4096, 00:15:06.882 "physical_block_size": 4096, 00:15:06.882 "uuid": "66dfd157-d7a7-4993-a0d4-0ca3123428b9", 00:15:06.882 "optimal_io_boundary": 0, 00:15:06.882 "md_size": 0, 00:15:06.882 "dif_type": 0, 00:15:06.882 "dif_is_head_of_md": false, 00:15:06.882 "dif_pi_format": 0 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "bdev_wait_for_examine" 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "nbd", 00:15:06.882 "config": [] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "scheduler", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "framework_set_scheduler", 00:15:06.882 "params": { 00:15:06.882 "name": "static" 00:15:06.882 } 00:15:06.882 } 00:15:06.882 ] 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "subsystem": "nvmf", 00:15:06.882 "config": [ 00:15:06.882 { 00:15:06.882 "method": "nvmf_set_config", 00:15:06.882 "params": { 00:15:06.882 "discovery_filter": "match_any", 00:15:06.882 "admin_cmd_passthru": { 00:15:06.882 "identify_ctrlr": false 00:15:06.882 }, 00:15:06.882 "dhchap_digests": [ 00:15:06.882 "sha256", 00:15:06.882 "sha384", 00:15:06.882 "sha512" 00:15:06.882 ], 00:15:06.882 "dhchap_dhgroups": [ 00:15:06.882 "null", 00:15:06.882 "ffdhe2048", 00:15:06.882 "ffdhe3072", 00:15:06.882 "ffdhe4096", 00:15:06.882 "ffdhe6144", 00:15:06.882 "ffdhe8192" 00:15:06.882 ] 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "nvmf_set_max_subsystems", 00:15:06.882 "params": { 00:15:06.882 "max_subsystems": 1024 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "nvmf_set_crdt", 00:15:06.882 "params": { 00:15:06.882 "crdt1": 0, 00:15:06.882 "crdt2": 0, 00:15:06.882 "crdt3": 0 00:15:06.882 } 00:15:06.882 }, 00:15:06.882 { 00:15:06.882 "method": "nvmf_create_transport", 00:15:06.882 "params": { 00:15:06.882 "trtype": "TCP", 00:15:06.882 "max_queue_depth": 128, 00:15:06.882 "max_io_qpairs_per_ctrlr": 127, 00:15:06.882 "in_capsule_data_size": 4096, 00:15:06.882 "max_io_size": 131072, 00:15:06.882 "io_unit_size": 131072, 00:15:06.882 "max_aq_depth": 128, 00:15:06.882 "num_shared_buffers": 511, 00:15:06.882 "buf_cache_size": 4294967295, 00:15:06.883 "dif_insert_or_strip": false, 00:15:06.883 "zcopy": false, 00:15:06.883 "c2h_success": false, 00:15:06.883 "sock_priority": 0, 00:15:06.883 "abort_timeout_sec": 1, 00:15:06.883 "ack_timeout": 0, 00:15:06.883 "data_wr_pool_size": 0 00:15:06.883 } 00:15:06.883 }, 00:15:06.883 { 00:15:06.883 "method": "nvmf_create_subsystem", 00:15:06.883 "params": { 00:15:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.883 "allow_any_host": false, 00:15:06.883 "serial_number": "00000000000000000000", 00:15:06.883 "model_number": "SPDK bdev Controller", 00:15:06.883 "max_namespaces": 32, 00:15:06.883 "min_cntlid": 1, 00:15:06.883 "max_cntlid": 65519, 00:15:06.883 "ana_reporting": false 00:15:06.883 } 00:15:06.883 }, 00:15:06.883 { 00:15:06.883 "method": "nvmf_subsystem_add_host", 
00:15:06.883 "params": { 00:15:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.883 "host": "nqn.2016-06.io.spdk:host1", 00:15:06.883 "psk": "key0" 00:15:06.883 } 00:15:06.883 }, 00:15:06.883 { 00:15:06.883 "method": "nvmf_subsystem_add_ns", 00:15:06.883 "params": { 00:15:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.883 "namespace": { 00:15:06.883 "nsid": 1, 00:15:06.883 "bdev_name": "malloc0", 00:15:06.883 "nguid": "66DFD157D7A74993A0D40CA3123428B9", 00:15:06.883 "uuid": "66dfd157-d7a7-4993-a0d4-0ca3123428b9", 00:15:06.883 "no_auto_visible": false 00:15:06.883 } 00:15:06.883 } 00:15:06.883 }, 00:15:06.883 { 00:15:06.883 "method": "nvmf_subsystem_add_listener", 00:15:06.883 "params": { 00:15:06.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.883 "listen_address": { 00:15:06.883 "trtype": "TCP", 00:15:06.883 "adrfam": "IPv4", 00:15:06.883 "traddr": "10.0.0.3", 00:15:06.883 "trsvcid": "4420" 00:15:06.883 }, 00:15:06.883 "secure_channel": false, 00:15:06.883 "sock_impl": "ssl" 00:15:06.883 } 00:15:06.883 } 00:15:06.883 ] 00:15:06.883 } 00:15:06.883 ] 00:15:06.883 }' 00:15:06.883 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:07.449 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:07.449 "subsystems": [ 00:15:07.449 { 00:15:07.449 "subsystem": "keyring", 00:15:07.449 "config": [ 00:15:07.449 { 00:15:07.449 "method": "keyring_file_add_key", 00:15:07.449 "params": { 00:15:07.449 "name": "key0", 00:15:07.449 "path": "/tmp/tmp.DlhSamf6Nn" 00:15:07.449 } 00:15:07.449 } 00:15:07.449 ] 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "subsystem": "iobuf", 00:15:07.449 "config": [ 00:15:07.449 { 00:15:07.449 "method": "iobuf_set_options", 00:15:07.449 "params": { 00:15:07.449 "small_pool_count": 8192, 00:15:07.449 "large_pool_count": 1024, 00:15:07.449 "small_bufsize": 8192, 00:15:07.449 "large_bufsize": 135168, 00:15:07.449 "enable_numa": false 00:15:07.449 } 00:15:07.449 } 00:15:07.449 ] 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "subsystem": "sock", 00:15:07.449 "config": [ 00:15:07.449 { 00:15:07.449 "method": "sock_set_default_impl", 00:15:07.449 "params": { 00:15:07.449 "impl_name": "uring" 00:15:07.449 } 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "method": "sock_impl_set_options", 00:15:07.449 "params": { 00:15:07.449 "impl_name": "ssl", 00:15:07.449 "recv_buf_size": 4096, 00:15:07.449 "send_buf_size": 4096, 00:15:07.449 "enable_recv_pipe": true, 00:15:07.449 "enable_quickack": false, 00:15:07.449 "enable_placement_id": 0, 00:15:07.449 "enable_zerocopy_send_server": true, 00:15:07.449 "enable_zerocopy_send_client": false, 00:15:07.449 "zerocopy_threshold": 0, 00:15:07.449 "tls_version": 0, 00:15:07.449 "enable_ktls": false 00:15:07.449 } 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "method": "sock_impl_set_options", 00:15:07.449 "params": { 00:15:07.449 "impl_name": "posix", 00:15:07.449 "recv_buf_size": 2097152, 00:15:07.449 "send_buf_size": 2097152, 00:15:07.449 "enable_recv_pipe": true, 00:15:07.449 "enable_quickack": false, 00:15:07.449 "enable_placement_id": 0, 00:15:07.449 "enable_zerocopy_send_server": true, 00:15:07.449 "enable_zerocopy_send_client": false, 00:15:07.449 "zerocopy_threshold": 0, 00:15:07.449 "tls_version": 0, 00:15:07.449 "enable_ktls": false 00:15:07.449 } 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "method": "sock_impl_set_options", 00:15:07.449 "params": { 00:15:07.449 "impl_name": "uring", 00:15:07.449 
"recv_buf_size": 2097152, 00:15:07.449 "send_buf_size": 2097152, 00:15:07.449 "enable_recv_pipe": true, 00:15:07.449 "enable_quickack": false, 00:15:07.449 "enable_placement_id": 0, 00:15:07.449 "enable_zerocopy_send_server": false, 00:15:07.449 "enable_zerocopy_send_client": false, 00:15:07.449 "zerocopy_threshold": 0, 00:15:07.449 "tls_version": 0, 00:15:07.449 "enable_ktls": false 00:15:07.449 } 00:15:07.449 } 00:15:07.449 ] 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "subsystem": "vmd", 00:15:07.449 "config": [] 00:15:07.449 }, 00:15:07.449 { 00:15:07.449 "subsystem": "accel", 00:15:07.450 "config": [ 00:15:07.450 { 00:15:07.450 "method": "accel_set_options", 00:15:07.450 "params": { 00:15:07.450 "small_cache_size": 128, 00:15:07.450 "large_cache_size": 16, 00:15:07.450 "task_count": 2048, 00:15:07.450 "sequence_count": 2048, 00:15:07.450 "buf_count": 2048 00:15:07.450 } 00:15:07.450 } 00:15:07.450 ] 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "subsystem": "bdev", 00:15:07.450 "config": [ 00:15:07.450 { 00:15:07.450 "method": "bdev_set_options", 00:15:07.450 "params": { 00:15:07.450 "bdev_io_pool_size": 65535, 00:15:07.450 "bdev_io_cache_size": 256, 00:15:07.450 "bdev_auto_examine": true, 00:15:07.450 "iobuf_small_cache_size": 128, 00:15:07.450 "iobuf_large_cache_size": 16 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_raid_set_options", 00:15:07.450 "params": { 00:15:07.450 "process_window_size_kb": 1024, 00:15:07.450 "process_max_bandwidth_mb_sec": 0 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_iscsi_set_options", 00:15:07.450 "params": { 00:15:07.450 "timeout_sec": 30 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_nvme_set_options", 00:15:07.450 "params": { 00:15:07.450 "action_on_timeout": "none", 00:15:07.450 "timeout_us": 0, 00:15:07.450 "timeout_admin_us": 0, 00:15:07.450 "keep_alive_timeout_ms": 10000, 00:15:07.450 "arbitration_burst": 0, 00:15:07.450 "low_priority_weight": 0, 00:15:07.450 "medium_priority_weight": 0, 00:15:07.450 "high_priority_weight": 0, 00:15:07.450 "nvme_adminq_poll_period_us": 10000, 00:15:07.450 "nvme_ioq_poll_period_us": 0, 00:15:07.450 "io_queue_requests": 512, 00:15:07.450 "delay_cmd_submit": true, 00:15:07.450 "transport_retry_count": 4, 00:15:07.450 "bdev_retry_count": 3, 00:15:07.450 "transport_ack_timeout": 0, 00:15:07.450 "ctrlr_loss_timeout_sec": 0, 00:15:07.450 "reconnect_delay_sec": 0, 00:15:07.450 "fast_io_fail_timeout_sec": 0, 00:15:07.450 "disable_auto_failback": false, 00:15:07.450 "generate_uuids": false, 00:15:07.450 "transport_tos": 0, 00:15:07.450 "nvme_error_stat": false, 00:15:07.450 "rdma_srq_size": 0, 00:15:07.450 "io_path_stat": false, 00:15:07.450 "allow_accel_sequence": false, 00:15:07.450 "rdma_max_cq_size": 0, 00:15:07.450 "rdma_cm_event_timeout_ms": 0, 00:15:07.450 "dhchap_digests": [ 00:15:07.450 "sha256", 00:15:07.450 "sha384", 00:15:07.450 "sha512" 00:15:07.450 ], 00:15:07.450 "dhchap_dhgroups": [ 00:15:07.450 "null", 00:15:07.450 "ffdhe2048", 00:15:07.450 "ffdhe3072", 00:15:07.450 "ffdhe4096", 00:15:07.450 "ffdhe6144", 00:15:07.450 "ffdhe8192" 00:15:07.450 ] 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_nvme_attach_controller", 00:15:07.450 "params": { 00:15:07.450 "name": "nvme0", 00:15:07.450 "trtype": "TCP", 00:15:07.450 "adrfam": "IPv4", 00:15:07.450 "traddr": "10.0.0.3", 00:15:07.450 "trsvcid": "4420", 00:15:07.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.450 "prchk_reftag": false, 00:15:07.450 
"prchk_guard": false, 00:15:07.450 "ctrlr_loss_timeout_sec": 0, 00:15:07.450 "reconnect_delay_sec": 0, 00:15:07.450 "fast_io_fail_timeout_sec": 0, 00:15:07.450 "psk": "key0", 00:15:07.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.450 "hdgst": false, 00:15:07.450 "ddgst": false, 00:15:07.450 "multipath": "multipath" 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_nvme_set_hotplug", 00:15:07.450 "params": { 00:15:07.450 "period_us": 100000, 00:15:07.450 "enable": false 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_enable_histogram", 00:15:07.450 "params": { 00:15:07.450 "name": "nvme0n1", 00:15:07.450 "enable": true 00:15:07.450 } 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "method": "bdev_wait_for_examine" 00:15:07.450 } 00:15:07.450 ] 00:15:07.450 }, 00:15:07.450 { 00:15:07.450 "subsystem": "nbd", 00:15:07.450 "config": [] 00:15:07.450 } 00:15:07.450 ] 00:15:07.450 }' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72777 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72777 ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72777 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72777 00:15:07.450 killing process with pid 72777 00:15:07.450 Received shutdown signal, test time was about 1.000000 seconds 00:15:07.450 00:15:07.450 Latency(us) 00:15:07.450 [2024-11-20T16:03:05.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.450 [2024-11-20T16:03:05.700Z] =================================================================================================================== 00:15:07.450 [2024-11-20T16:03:05.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72777' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72777 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72777 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72745 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72745 ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72745 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72745 00:15:07.450 killing process with pid 72745 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72745' 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72745 00:15:07.450 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72745 00:15:07.708 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:07.708 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:07.708 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:07.708 "subsystems": [ 00:15:07.708 { 00:15:07.708 "subsystem": "keyring", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "keyring_file_add_key", 00:15:07.708 "params": { 00:15:07.708 "name": "key0", 00:15:07.708 "path": "/tmp/tmp.DlhSamf6Nn" 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "iobuf", 00:15:07.708 "config": [ 00:15:07.708 { 00:15:07.708 "method": "iobuf_set_options", 00:15:07.708 "params": { 00:15:07.708 "small_pool_count": 8192, 00:15:07.708 "large_pool_count": 1024, 00:15:07.708 "small_bufsize": 8192, 00:15:07.708 "large_bufsize": 135168, 00:15:07.708 "enable_numa": false 00:15:07.708 } 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "sock", 00:15:07.708 "config": [ 00:15:07.709 { 00:15:07.709 "method": "sock_set_default_impl", 00:15:07.709 "params": { 00:15:07.709 "impl_name": "uring" 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "sock_impl_set_options", 00:15:07.709 "params": { 00:15:07.709 "impl_name": "ssl", 00:15:07.709 "recv_buf_size": 4096, 00:15:07.709 "send_buf_size": 4096, 00:15:07.709 "enable_recv_pipe": true, 00:15:07.709 "enable_quickack": false, 00:15:07.709 "enable_placement_id": 0, 00:15:07.709 "enable_zerocopy_send_server": true, 00:15:07.709 "enable_zerocopy_send_client": false, 00:15:07.709 "zerocopy_threshold": 0, 00:15:07.709 "tls_version": 0, 00:15:07.709 "enable_ktls": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "sock_impl_set_options", 00:15:07.709 "params": { 00:15:07.709 "impl_name": "posix", 00:15:07.709 "recv_buf_size": 2097152, 00:15:07.709 "send_buf_size": 2097152, 00:15:07.709 "enable_recv_pipe": true, 00:15:07.709 "enable_quickack": false, 00:15:07.709 "enable_placement_id": 0, 00:15:07.709 "enable_zerocopy_send_server": true, 00:15:07.709 "enable_zerocopy_send_client": false, 00:15:07.709 "zerocopy_threshold": 0, 00:15:07.709 "tls_version": 0, 00:15:07.709 "enable_ktls": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "sock_impl_set_options", 00:15:07.709 "params": { 00:15:07.709 "impl_name": "uring", 00:15:07.709 "recv_buf_size": 2097152, 00:15:07.709 "send_buf_size": 2097152, 00:15:07.709 "enable_recv_pipe": true, 00:15:07.709 "enable_quickack": false, 00:15:07.709 "enable_placement_id": 0, 00:15:07.709 "enable_zerocopy_send_server": false, 00:15:07.709 "enable_zerocopy_send_client": false, 00:15:07.709 "zerocopy_threshold": 0, 00:15:07.709 "tls_version": 0, 00:15:07.709 "enable_ktls": false 00:15:07.709 } 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "vmd", 00:15:07.709 "config": [] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 
"subsystem": "accel", 00:15:07.709 "config": [ 00:15:07.709 { 00:15:07.709 "method": "accel_set_options", 00:15:07.709 "params": { 00:15:07.709 "small_cache_size": 128, 00:15:07.709 "large_cache_size": 16, 00:15:07.709 "task_count": 2048, 00:15:07.709 "sequence_count": 2048, 00:15:07.709 "buf_count": 2048 00:15:07.709 } 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "bdev", 00:15:07.709 "config": [ 00:15:07.709 { 00:15:07.709 "method": "bdev_set_options", 00:15:07.709 "params": { 00:15:07.709 "bdev_io_pool_size": 65535, 00:15:07.709 "bdev_io_cache_size": 256, 00:15:07.709 "bdev_auto_examine": true, 00:15:07.709 "iobuf_small_cache_size": 128, 00:15:07.709 "iobuf_large_cache_size": 16 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_raid_set_options", 00:15:07.709 "params": { 00:15:07.709 "process_window_size_kb": 1024, 00:15:07.709 "process_max_bandwidth_mb_sec": 0 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_iscsi_set_options", 00:15:07.709 "params": { 00:15:07.709 "timeout_sec": 30 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_nvme_set_options", 00:15:07.709 "params": { 00:15:07.709 "action_on_timeout": "none", 00:15:07.709 "timeout_us": 0, 00:15:07.709 "timeout_admin_us": 0, 00:15:07.709 "keep_alive_timeout_ms": 10000, 00:15:07.709 "arbitration_burst": 0, 00:15:07.709 "low_priority_weight": 0, 00:15:07.709 "medium_priority_weight": 0, 00:15:07.709 "high_priority_weight": 0, 00:15:07.709 "nvme_adminq_poll_period_us": 10000, 00:15:07.709 "nvme_ioq_poll_period_us": 0, 00:15:07.709 "io_queue_requests": 0, 00:15:07.709 "delay_cmd_submit": true, 00:15:07.709 "transport_retry_count": 4, 00:15:07.709 "bdev_retry_count": 3, 00:15:07.709 "transport_ack_timeout": 0, 00:15:07.709 "ctrlr_loss_timeout_sec": 0, 00:15:07.709 "reconnect_delay_sec": 0, 00:15:07.709 "fast_io_fail_timeout_sec": 0, 00:15:07.709 "disable_auto_failback": false, 00:15:07.709 "generate_uuids": false, 00:15:07.709 "transport_tos": 0, 00:15:07.709 "nvme_error_stat": false, 00:15:07.709 "rdma_srq_size": 0, 00:15:07.709 "io_path_stat": false, 00:15:07.709 "allow_accel_sequence": false, 00:15:07.709 "rdma_max_cq_size": 0, 00:15:07.709 "rdma_cm_event_timeout_ms": 0, 00:15:07.709 "dhchap_digests": [ 00:15:07.709 "sha256", 00:15:07.709 "sha384", 00:15:07.709 "sha512" 00:15:07.709 ], 00:15:07.709 "dhchap_dhgroups": [ 00:15:07.709 "null", 00:15:07.709 "ffdhe2048", 00:15:07.709 "ffdhe3072", 00:15:07.709 "ffdhe4096", 00:15:07.709 "ffdhe6144", 00:15:07.709 "ffdhe8192" 00:15:07.709 ] 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_nvme_set_hotplug", 00:15:07.709 "params": { 00:15:07.709 "period_us": 100000, 00:15:07.709 "enable": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_malloc_create", 00:15:07.709 "params": { 00:15:07.709 "name": "malloc0", 00:15:07.709 "num_blocks": 8192, 00:15:07.709 "block_size": 4096, 00:15:07.709 "physical_block_size": 4096, 00:15:07.709 "uuid": "66dfd157-d7a7-4993-a0d4-0ca3123428b9", 00:15:07.709 "optimal_io_boundary": 0, 00:15:07.709 "md_size": 0, 00:15:07.709 "dif_type": 0, 00:15:07.709 "dif_is_head_of_md": false, 00:15:07.709 "dif_pi_format": 0 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "bdev_wait_for_examine" 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "nbd", 00:15:07.709 "config": [] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "scheduler", 
00:15:07.709 "config": [ 00:15:07.709 { 00:15:07.709 "method": "framework_set_scheduler", 00:15:07.709 "params": { 00:15:07.709 "name": "static" 00:15:07.709 } 00:15:07.709 } 00:15:07.709 ] 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "subsystem": "nvmf", 00:15:07.709 "config": [ 00:15:07.709 { 00:15:07.709 "method": "nvmf_set_config", 00:15:07.709 "params": { 00:15:07.709 "discovery_filter": "match_any", 00:15:07.709 "admin_cmd_passthru": { 00:15:07.709 "identify_ctrlr": false 00:15:07.709 }, 00:15:07.709 "dhchap_digests": [ 00:15:07.709 "sha256", 00:15:07.709 "sha384", 00:15:07.709 "sha512" 00:15:07.709 ], 00:15:07.709 "dhchap_dhgroups": [ 00:15:07.709 "null", 00:15:07.709 "ffdhe2048", 00:15:07.709 "ffdhe3072", 00:15:07.709 "ffdhe4096", 00:15:07.709 "ffdhe6144", 00:15:07.709 "ffdhe8192" 00:15:07.709 ] 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_set_max_subsystems", 00:15:07.709 "params": { 00:15:07.709 "max_subsystems": 1024 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_set_crdt", 00:15:07.709 "params": { 00:15:07.709 "crdt1": 0, 00:15:07.709 "crdt2": 0, 00:15:07.709 "crdt3": 0 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_create_transport", 00:15:07.709 "params": { 00:15:07.709 "trtype": "TCP", 00:15:07.709 "max_queue_depth": 128, 00:15:07.709 "max_io_qpairs_per_ctrlr": 127, 00:15:07.709 "in_capsule_data_size": 4096, 00:15:07.709 "max_io_size": 131072, 00:15:07.709 "io_unit_size": 131072, 00:15:07.709 "max_aq_depth": 128, 00:15:07.709 "num_shared_buffers": 511, 00:15:07.709 "buf_cache_size": 4294967295, 00:15:07.709 "dif_insert_or_strip": false, 00:15:07.709 "zcopy": false, 00:15:07.709 "c2h_success": false, 00:15:07.709 "sock_priority": 0, 00:15:07.709 "abort_timeout_sec": 1, 00:15:07.709 "ack_timeout": 0, 00:15:07.709 "data_wr_pool_size": 0 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_create_subsystem", 00:15:07.709 "params": { 00:15:07.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.709 "allow_any_host": false, 00:15:07.709 "serial_number": "00000000000000000000", 00:15:07.709 "model_number": "SPDK bdev Controller", 00:15:07.709 "max_namespaces": 32, 00:15:07.709 "min_cntlid": 1, 00:15:07.709 "max_cntlid": 65519, 00:15:07.709 "ana_reporting": false 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_subsystem_add_host", 00:15:07.709 "params": { 00:15:07.709 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.709 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.709 "psk": "key0" 00:15:07.709 } 00:15:07.709 }, 00:15:07.709 { 00:15:07.709 "method": "nvmf_subsystem_add_ns", 00:15:07.710 "params": { 00:15:07.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.710 "namespace": { 00:15:07.710 "nsid": 1, 00:15:07.710 "bdev_name": "malloc0", 00:15:07.710 "nguid": "66DFD157D7A74993A0D40CA3123428B9", 00:15:07.710 "uuid": "66dfd157-d7a7-4993-a0d4-0ca3123428b9", 00:15:07.710 "no_auto_visible": false 00:15:07.710 } 00:15:07.710 } 00:15:07.710 }, 00:15:07.710 { 00:15:07.710 "method": "nvmf_subsystem_add_listener", 00:15:07.710 "params": { 00:15:07.710 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.710 "listen_address": { 00:15:07.710 "trtype": "TCP", 00:15:07.710 "adrfam": "IPv4", 00:15:07.710 "traddr": "10.0.0.3", 00:15:07.710 "trsvcid": "4420" 00:15:07.710 }, 00:15:07.710 "secure_channel": false, 00:15:07.710 "sock_impl": "ssl" 00:15:07.710 } 00:15:07.710 } 00:15:07.710 ] 00:15:07.710 } 00:15:07.710 ] 00:15:07.710 }' 00:15:07.710 16:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72838 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72838 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72838 ']' 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.710 16:03:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.710 [2024-11-20 16:03:05.940671] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:07.710 [2024-11-20 16:03:05.940785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.967 [2024-11-20 16:03:06.093380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.967 [2024-11-20 16:03:06.162564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.967 [2024-11-20 16:03:06.162622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:07.967 [2024-11-20 16:03:06.162637] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.967 [2024-11-20 16:03:06.162647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.967 [2024-11-20 16:03:06.162657] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
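The startup notices above point at two ways to inspect the trace data generated because the target runs with -e 0xFFFF: parse the shared-memory trace file live with spdk_trace, or copy it for offline analysis, which is what the final cleanup does when it archives nvmf_trace.0. A sketch; the output redirection and copy destination are illustrative:

    # live snapshot of the nvmf trace groups, per the notice above
    spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw shared-memory file for later analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0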
00:15:07.967 [2024-11-20 16:03:06.163170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.225 [2024-11-20 16:03:06.330652] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.225 [2024-11-20 16:03:06.413682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.225 [2024-11-20 16:03:06.445635] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.225 [2024-11-20 16:03:06.445899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72870 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72870 /var/tmp/bdevperf.sock 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 72870 ']' 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.791 16:03:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:08.791 "subsystems": [ 00:15:08.791 { 00:15:08.791 "subsystem": "keyring", 00:15:08.791 "config": [ 00:15:08.791 { 00:15:08.791 "method": "keyring_file_add_key", 00:15:08.791 "params": { 00:15:08.791 "name": "key0", 00:15:08.791 "path": "/tmp/tmp.DlhSamf6Nn" 00:15:08.791 } 00:15:08.791 } 00:15:08.791 ] 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "subsystem": "iobuf", 00:15:08.791 "config": [ 00:15:08.791 { 00:15:08.791 "method": "iobuf_set_options", 00:15:08.791 "params": { 00:15:08.791 "small_pool_count": 8192, 00:15:08.791 "large_pool_count": 1024, 00:15:08.791 "small_bufsize": 8192, 00:15:08.791 "large_bufsize": 135168, 00:15:08.791 "enable_numa": false 00:15:08.791 } 00:15:08.791 } 00:15:08.791 ] 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "subsystem": "sock", 00:15:08.791 "config": [ 00:15:08.791 { 00:15:08.791 "method": "sock_set_default_impl", 00:15:08.791 "params": { 00:15:08.791 "impl_name": "uring" 00:15:08.791 } 00:15:08.791 }, 00:15:08.791 { 00:15:08.791 "method": "sock_impl_set_options", 00:15:08.792 "params": { 00:15:08.792 "impl_name": "ssl", 00:15:08.792 "recv_buf_size": 4096, 00:15:08.792 "send_buf_size": 4096, 00:15:08.792 "enable_recv_pipe": true, 00:15:08.792 "enable_quickack": false, 00:15:08.792 "enable_placement_id": 0, 00:15:08.792 "enable_zerocopy_send_server": true, 00:15:08.792 "enable_zerocopy_send_client": false, 00:15:08.792 "zerocopy_threshold": 0, 00:15:08.792 "tls_version": 0, 00:15:08.792 "enable_ktls": false 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "sock_impl_set_options", 00:15:08.792 "params": { 00:15:08.792 "impl_name": "posix", 00:15:08.792 "recv_buf_size": 2097152, 00:15:08.792 "send_buf_size": 2097152, 00:15:08.792 "enable_recv_pipe": true, 00:15:08.792 "enable_quickack": false, 00:15:08.792 "enable_placement_id": 0, 00:15:08.792 "enable_zerocopy_send_server": true, 00:15:08.792 "enable_zerocopy_send_client": false, 00:15:08.792 "zerocopy_threshold": 0, 00:15:08.792 "tls_version": 0, 00:15:08.792 "enable_ktls": false 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "sock_impl_set_options", 00:15:08.792 "params": { 00:15:08.792 "impl_name": "uring", 00:15:08.792 "recv_buf_size": 2097152, 00:15:08.792 "send_buf_size": 2097152, 00:15:08.792 "enable_recv_pipe": true, 00:15:08.792 "enable_quickack": false, 00:15:08.792 "enable_placement_id": 0, 00:15:08.792 "enable_zerocopy_send_server": false, 00:15:08.792 "enable_zerocopy_send_client": false, 00:15:08.792 "zerocopy_threshold": 0, 00:15:08.792 "tls_version": 0, 00:15:08.792 "enable_ktls": false 00:15:08.792 } 00:15:08.792 } 00:15:08.792 ] 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "subsystem": "vmd", 00:15:08.792 "config": [] 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "subsystem": "accel", 00:15:08.792 "config": [ 00:15:08.792 { 00:15:08.792 "method": "accel_set_options", 00:15:08.792 "params": { 00:15:08.792 "small_cache_size": 128, 00:15:08.792 "large_cache_size": 16, 00:15:08.792 "task_count": 2048, 00:15:08.792 "sequence_count": 2048, 00:15:08.792 "buf_count": 2048 00:15:08.792 } 00:15:08.792 } 00:15:08.792 ] 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "subsystem": "bdev", 00:15:08.792 "config": [ 00:15:08.792 { 00:15:08.792 "method": 
"bdev_set_options", 00:15:08.792 "params": { 00:15:08.792 "bdev_io_pool_size": 65535, 00:15:08.792 "bdev_io_cache_size": 256, 00:15:08.792 "bdev_auto_examine": true, 00:15:08.792 "iobuf_small_cache_size": 128, 00:15:08.792 "iobuf_large_cache_size": 16 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_raid_set_options", 00:15:08.792 "params": { 00:15:08.792 "process_window_size_kb": 1024, 00:15:08.792 "process_max_bandwidth_mb_sec": 0 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_iscsi_set_options", 00:15:08.792 "params": { 00:15:08.792 "timeout_sec": 30 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_nvme_set_options", 00:15:08.792 "params": { 00:15:08.792 "action_on_timeout": "none", 00:15:08.792 "timeout_us": 0, 00:15:08.792 "timeout_admin_us": 0, 00:15:08.792 "keep_alive_timeout_ms": 10000, 00:15:08.792 "arbitration_burst": 0, 00:15:08.792 "low_priority_weight": 0, 00:15:08.792 "medium_priority_weight": 0, 00:15:08.792 "high_priority_weight": 0, 00:15:08.792 "nvme_adminq_poll_period_us": 10000, 00:15:08.792 "nvme_ioq_poll_period_us": 0, 00:15:08.792 "io_queue_requests": 512, 00:15:08.792 "delay_cmd_submit": true, 00:15:08.792 "transport_retry_count": 4, 00:15:08.792 "bdev_retry_count": 3, 00:15:08.792 "transport_ack_timeout": 0, 00:15:08.792 "ctrlr_loss_timeout_sec": 0, 00:15:08.792 "reconnect_delay_sec": 0, 00:15:08.792 "fast_io_fail_timeout_sec": 0, 00:15:08.792 "disable_auto_failback": false, 00:15:08.792 "generate_uuids": false, 00:15:08.792 "transport_tos": 0, 00:15:08.792 "nvme_error_stat": false, 00:15:08.792 "rdma_srq_size": 0, 00:15:08.792 "io_path_stat": false, 00:15:08.792 "allow_accel_sequence": false, 00:15:08.792 "rdma_max_cq_size": 0, 00:15:08.792 "rdma_cm_event_timeout_ms": 0, 00:15:08.792 "dhchap_digests": [ 00:15:08.792 "sha256", 00:15:08.792 "sha384", 00:15:08.792 "sha512" 00:15:08.792 ], 00:15:08.792 "dhchap_dhgroups": [ 00:15:08.792 "null", 00:15:08.792 "ffdhe2048", 00:15:08.792 "ffdhe3072", 00:15:08.792 "ffdhe4096", 00:15:08.792 "ffdhe6144", 00:15:08.792 "ffdhe8192" 00:15:08.792 ] 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_nvme_attach_controller", 00:15:08.792 "params": { 00:15:08.792 "name": "nvme0", 00:15:08.792 "trtype": "TCP", 00:15:08.792 "adrfam": "IPv4", 00:15:08.792 "traddr": "10.0.0.3", 00:15:08.792 "trsvcid": "4420", 00:15:08.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.792 "prchk_reftag": false, 00:15:08.792 "prchk_guard": false, 00:15:08.792 "ctrlr_loss_timeout_sec": 0, 00:15:08.792 "reconnect_delay_sec": 0, 00:15:08.792 "fast_io_fail_timeout_sec": 0, 00:15:08.792 "psk": "key0", 00:15:08.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:08.792 "hdgst": false, 00:15:08.792 "ddgst": false, 00:15:08.792 "multipath": "multipath" 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_nvme_set_hotplug", 00:15:08.792 "params": { 00:15:08.792 "period_us": 100000, 00:15:08.792 "enable": false 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_enable_histogram", 00:15:08.792 "params": { 00:15:08.792 "name": "nvme0n1", 00:15:08.792 "enable": true 00:15:08.792 } 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "method": "bdev_wait_for_examine" 00:15:08.792 } 00:15:08.792 ] 00:15:08.792 }, 00:15:08.792 { 00:15:08.792 "subsystem": "nbd", 00:15:08.792 "config": [] 00:15:08.792 } 00:15:08.792 ] 00:15:08.792 }' 00:15:09.049 [2024-11-20 16:03:07.063468] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 
initialization... 00:15:09.049 [2024-11-20 16:03:07.063920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72870 ] 00:15:09.049 [2024-11-20 16:03:07.215272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.049 [2024-11-20 16:03:07.275943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.306 [2024-11-20 16:03:07.415135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.306 [2024-11-20 16:03:07.466826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:09.873 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.873 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:09.873 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:09.873 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:10.130 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.130 16:03:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.388 Running I/O for 1 seconds... 00:15:11.342 3753.00 IOPS, 14.66 MiB/s 00:15:11.342 Latency(us) 00:15:11.342 [2024-11-20T16:03:09.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.343 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:11.343 Verification LBA range: start 0x0 length 0x2000 00:15:11.343 nvme0n1 : 1.02 3806.26 14.87 0.00 0.00 33229.63 6076.97 27405.96 00:15:11.343 [2024-11-20T16:03:09.593Z] =================================================================================================================== 00:15:11.343 [2024-11-20T16:03:09.593Z] Total : 3806.26 14.87 0.00 0.00 33229.63 6076.97 27405.96 00:15:11.343 { 00:15:11.343 "results": [ 00:15:11.343 { 00:15:11.343 "job": "nvme0n1", 00:15:11.343 "core_mask": "0x2", 00:15:11.343 "workload": "verify", 00:15:11.343 "status": "finished", 00:15:11.343 "verify_range": { 00:15:11.343 "start": 0, 00:15:11.343 "length": 8192 00:15:11.343 }, 00:15:11.343 "queue_depth": 128, 00:15:11.343 "io_size": 4096, 00:15:11.343 "runtime": 1.019898, 00:15:11.343 "iops": 3806.2629792391003, 00:15:11.343 "mibps": 14.868214762652736, 00:15:11.343 "io_failed": 0, 00:15:11.343 "io_timeout": 0, 00:15:11.343 "avg_latency_us": 33229.627314879865, 00:15:11.343 "min_latency_us": 6076.9745454545455, 00:15:11.343 "max_latency_us": 27405.963636363635 00:15:11.343 } 00:15:11.343 ], 00:15:11.343 "core_count": 1 00:15:11.343 } 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@813 -- # id=0 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:11.343 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:11.343 nvmf_trace.0 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72870 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72870 ']' 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72870 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72870 00:15:11.601 killing process with pid 72870 00:15:11.601 Received shutdown signal, test time was about 1.000000 seconds 00:15:11.601 00:15:11.601 Latency(us) 00:15:11.601 [2024-11-20T16:03:09.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.601 [2024-11-20T16:03:09.851Z] =================================================================================================================== 00:15:11.601 [2024-11-20T16:03:09.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72870' 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72870 00:15:11.601 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72870 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.860 16:03:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.860 rmmod nvme_tcp 00:15:11.860 rmmod nvme_fabrics 00:15:11.860 rmmod nvme_keyring 00:15:11.860 16:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72838 ']' 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72838 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 72838 ']' 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 72838 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72838 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.860 killing process with pid 72838 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72838' 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 72838 00:15:11.860 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 72838 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:12.118 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:15:12.119 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.yTNBZUXiSh /tmp/tmp.fawmtTv444 /tmp/tmp.DlhSamf6Nn 00:15:12.378 00:15:12.378 real 1m28.537s 00:15:12.378 user 2m25.307s 00:15:12.378 sys 0m27.482s 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.378 ************************************ 00:15:12.378 END TEST nvmf_tls 00:15:12.378 ************************************ 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:12.378 ************************************ 00:15:12.378 START TEST nvmf_fips 00:15:12.378 ************************************ 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:12.378 * Looking for test storage... 
00:15:12.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:12.378 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.638 --rc genhtml_branch_coverage=1 00:15:12.638 --rc genhtml_function_coverage=1 00:15:12.638 --rc genhtml_legend=1 00:15:12.638 --rc geninfo_all_blocks=1 00:15:12.638 --rc geninfo_unexecuted_blocks=1 00:15:12.638 00:15:12.638 ' 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.638 --rc genhtml_branch_coverage=1 00:15:12.638 --rc genhtml_function_coverage=1 00:15:12.638 --rc genhtml_legend=1 00:15:12.638 --rc geninfo_all_blocks=1 00:15:12.638 --rc geninfo_unexecuted_blocks=1 00:15:12.638 00:15:12.638 ' 00:15:12.638 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:12.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.638 --rc genhtml_branch_coverage=1 00:15:12.638 --rc genhtml_function_coverage=1 00:15:12.638 --rc genhtml_legend=1 00:15:12.638 --rc geninfo_all_blocks=1 00:15:12.638 --rc geninfo_unexecuted_blocks=1 00:15:12.638 00:15:12.638 ' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:12.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:12.639 --rc genhtml_branch_coverage=1 00:15:12.639 --rc genhtml_function_coverage=1 00:15:12.639 --rc genhtml_legend=1 00:15:12.639 --rc geninfo_all_blocks=1 00:15:12.639 --rc geninfo_unexecuted_blocks=1 00:15:12.639 00:15:12.639 ' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
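[annotation] The lcov "lt 1.15 2" walk a few entries up (fips.sh repeats the same dance later as "ge 3.1.1 3.0.0" against its OpenSSL 3.0.0 floor) is scripts/common.sh splitting each version string on dots and dashes and comparing the fields numerically. A simplified standalone sketch of that comparison follows; the helper name ver_cmp and the reduced operator handling are illustrative, this is not the harness's own cmp_versions code.

#!/usr/bin/env bash
# Minimal field-by-field version compare, mirroring the trace above.
# Only the '<' and '>=' forms exercised by the harness are handled.
ver_cmp() {                       # usage: ver_cmp 1.15 '<' 2
    local IFS=.- i a b
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$3"
    for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}        # missing fields count as 0
        if (( a != b )); then
            case $2 in
                '<')  (( a < b )); return ;;
                '>=') (( a > b )); return ;;
            esac
        fi
    done
    [[ $2 == '>=' ]]              # all fields equal: only '>=' holds
}

ver_cmp 1.15 '<' 2       && echo 'lcov 1.15 predates 2.x'
ver_cmp 3.1.1 '>=' 3.0.0 && echo 'OpenSSL 3.1.1 meets the 3.0.0 target'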
00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:12.639 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:12.639 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:12.640 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:12.899 Error setting digest 00:15:12.899 40C2168AE77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:12.899 40C2168AE77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:12.899 
16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:12.899 Cannot find device "nvmf_init_br" 00:15:12.899 16:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:12.899 Cannot find device "nvmf_init_br2" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:12.899 Cannot find device "nvmf_tgt_br" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:12.899 Cannot find device "nvmf_tgt_br2" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:12.899 Cannot find device "nvmf_init_br" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:12.899 Cannot find device "nvmf_init_br2" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:12.899 Cannot find device "nvmf_tgt_br" 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:12.899 16:03:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:12.899 Cannot find device "nvmf_tgt_br2" 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:12.899 Cannot find device "nvmf_br" 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:12.899 Cannot find device "nvmf_init_if" 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:12.899 Cannot find device "nvmf_init_if2" 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:12.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:12.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:12.899 16:03:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:12.899 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:12.900 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:12.900 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:12.900 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:12.900 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:12.900 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:13.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:13.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:13.158 00:15:13.158 --- 10.0.0.3 ping statistics --- 00:15:13.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.158 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:13.158 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:13.158 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:13.158 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:13.159 00:15:13.159 --- 10.0.0.4 ping statistics --- 00:15:13.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.159 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:13.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:13.159 00:15:13.159 --- 10.0.0.1 ping statistics --- 00:15:13.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.159 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:13.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:13.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:13.159 00:15:13.159 --- 10.0.0.2 ping statistics --- 00:15:13.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.159 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=73194 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 73194 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73194 ']' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.159 16:03:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:13.418 [2024-11-20 16:03:11.454288] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
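[annotation] Before the nvmf_tgt app above comes up inside nvmf_tgt_ns_spdk, nvmf_veth_init has already assembled the namespace, veth, and bridge plumbing that the 10.0.0.x pings just verified. The sketch below condenses that plumbing from the commands visible in the trace; it keeps one initiator-side and one target-side veth pair (the harness also creates the *_if2/*_br2 pair), uses the same names and addresses as this run, and omits teardown and error handling.

#!/usr/bin/env bash
# Condensed replay of the nvmf_veth_init steps shown in the trace above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"          # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
for dev in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br      # bridge the two veth peers together
ip link set nvmf_tgt_br  master nvmf_br

# Let NVMe/TCP traffic through and sanity-check reachability, as the trace does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3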
00:15:13.418 [2024-11-20 16:03:11.454418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.418 [2024-11-20 16:03:11.602146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.688 [2024-11-20 16:03:11.684327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.688 [2024-11-20 16:03:11.684410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.688 [2024-11-20 16:03:11.684440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.688 [2024-11-20 16:03:11.684456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.688 [2024-11-20 16:03:11.684470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.688 [2024-11-20 16:03:11.684975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.688 [2024-11-20 16:03:11.743033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.AzN 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.AzN 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.AzN 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.AzN 00:15:14.277 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.535 [2024-11-20 16:03:12.768685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.793 [2024-11-20 16:03:12.784629] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:14.793 [2024-11-20 16:03:12.784869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.793 malloc0 00:15:14.793 16:03:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=73230 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 73230 /var/tmp/bdevperf.sock 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 73230 ']' 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.793 16:03:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:14.793 [2024-11-20 16:03:12.942610] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:14.793 [2024-11-20 16:03:12.942716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73230 ] 00:15:15.052 [2024-11-20 16:03:13.089772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.052 [2024-11-20 16:03:13.152517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.052 [2024-11-20 16:03:13.206634] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.984 16:03:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.984 16:03:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:15.984 16:03:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.AzN 00:15:15.984 16:03:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:16.551 [2024-11-20 16:03:14.530840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.551 TLSTESTn1 00:15:16.551 16:03:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.551 Running I/O for 10 seconds... 
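[annotation] Stripped of the xtrace noise, the TLS path the ten-second run above exercises boils down to a 0600-protected PSK file plus three RPC calls against the bdevperf app. The sketch below replays just that initiator-side slice with the paths and interchange PSK from this run; the target-side listener setup done by setup_nvmf_tgt_conf (TCP transport plus the TLS listener on 10.0.0.3:4420) is assumed to already be in place, and a real deployment would generate its own key.

#!/usr/bin/env bash
# Initiator-side TLS attach, reduced to the commands visible in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
SOCK=/var/tmp/bdevperf.sock

# PSK in NVMe TLS interchange format, shared with the target, mode 0600.
key_path=$(mktemp -t spdk-psk.XXX)
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
chmod 0600 "$key_path"

# Register the key with the bdevperf app, then attach over NVMe/TCP using it.
"$RPC" -s "$SOCK" keyring_file_add_key key0 "$key_path"
"$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10).
"$BPERF" -s "$SOCK" perform_tests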
00:15:18.861 3965.00 IOPS, 15.49 MiB/s [2024-11-20T16:03:18.044Z] 4029.00 IOPS, 15.74 MiB/s [2024-11-20T16:03:18.978Z] 4059.00 IOPS, 15.86 MiB/s [2024-11-20T16:03:19.908Z] 4064.75 IOPS, 15.88 MiB/s [2024-11-20T16:03:20.841Z] 4077.40 IOPS, 15.93 MiB/s [2024-11-20T16:03:21.815Z] 4086.17 IOPS, 15.96 MiB/s [2024-11-20T16:03:22.748Z] 4090.43 IOPS, 15.98 MiB/s [2024-11-20T16:03:24.123Z] 4091.25 IOPS, 15.98 MiB/s [2024-11-20T16:03:25.056Z] 4094.44 IOPS, 15.99 MiB/s [2024-11-20T16:03:25.056Z] 4095.50 IOPS, 16.00 MiB/s 00:15:26.806 Latency(us) 00:15:26.806 [2024-11-20T16:03:25.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.806 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:26.806 Verification LBA range: start 0x0 length 0x2000 00:15:26.806 TLSTESTn1 : 10.02 4101.32 16.02 0.00 0.00 31151.35 5689.72 29193.31 00:15:26.806 [2024-11-20T16:03:25.056Z] =================================================================================================================== 00:15:26.806 [2024-11-20T16:03:25.056Z] Total : 4101.32 16.02 0.00 0.00 31151.35 5689.72 29193.31 00:15:26.806 { 00:15:26.806 "results": [ 00:15:26.806 { 00:15:26.806 "job": "TLSTESTn1", 00:15:26.806 "core_mask": "0x4", 00:15:26.806 "workload": "verify", 00:15:26.806 "status": "finished", 00:15:26.806 "verify_range": { 00:15:26.806 "start": 0, 00:15:26.806 "length": 8192 00:15:26.806 }, 00:15:26.806 "queue_depth": 128, 00:15:26.806 "io_size": 4096, 00:15:26.806 "runtime": 10.016053, 00:15:26.806 "iops": 4101.316157172891, 00:15:26.806 "mibps": 16.020766238956604, 00:15:26.806 "io_failed": 0, 00:15:26.806 "io_timeout": 0, 00:15:26.806 "avg_latency_us": 31151.353889910573, 00:15:26.806 "min_latency_us": 5689.716363636364, 00:15:26.806 "max_latency_us": 29193.30909090909 00:15:26.806 } 00:15:26.806 ], 00:15:26.806 "core_count": 1 00:15:26.806 } 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:26.806 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:26.807 nvmf_trace.0 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 73230 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73230 ']' 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
73230 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73230 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:26.807 killing process with pid 73230 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73230' 00:15:26.807 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.807 00:15:26.807 Latency(us) 00:15:26.807 [2024-11-20T16:03:25.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.807 [2024-11-20T16:03:25.057Z] =================================================================================================================== 00:15:26.807 [2024-11-20T16:03:25.057Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73230 00:15:26.807 16:03:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73230 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:27.125 rmmod nvme_tcp 00:15:27.125 rmmod nvme_fabrics 00:15:27.125 rmmod nvme_keyring 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 73194 ']' 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 73194 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 73194 ']' 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 73194 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73194 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73194' 00:15:27.125 killing process with pid 73194 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 73194 00:15:27.125 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 73194 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.403 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:27.663 16:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.AzN 00:15:27.663 ************************************ 00:15:27.663 END TEST nvmf_fips 00:15:27.663 ************************************ 00:15:27.663 00:15:27.663 real 0m15.127s 00:15:27.663 user 0m21.490s 00:15:27.663 sys 0m5.666s 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.663 ************************************ 00:15:27.663 START TEST nvmf_control_msg_list 00:15:27.663 ************************************ 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:27.663 * Looking for test storage... 00:15:27.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.663 --rc genhtml_branch_coverage=1 00:15:27.663 --rc genhtml_function_coverage=1 00:15:27.663 --rc genhtml_legend=1 00:15:27.663 --rc geninfo_all_blocks=1 00:15:27.663 --rc geninfo_unexecuted_blocks=1 00:15:27.663 00:15:27.663 ' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.663 --rc genhtml_branch_coverage=1 00:15:27.663 --rc genhtml_function_coverage=1 00:15:27.663 --rc genhtml_legend=1 00:15:27.663 --rc geninfo_all_blocks=1 00:15:27.663 --rc geninfo_unexecuted_blocks=1 00:15:27.663 00:15:27.663 ' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.663 --rc genhtml_branch_coverage=1 00:15:27.663 --rc genhtml_function_coverage=1 00:15:27.663 --rc genhtml_legend=1 00:15:27.663 --rc geninfo_all_blocks=1 00:15:27.663 --rc geninfo_unexecuted_blocks=1 00:15:27.663 00:15:27.663 ' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.663 --rc genhtml_branch_coverage=1 00:15:27.663 --rc genhtml_function_coverage=1 00:15:27.663 --rc genhtml_legend=1 00:15:27.663 --rc geninfo_all_blocks=1 00:15:27.663 --rc 
geninfo_unexecuted_blocks=1 00:15:27.663 00:15:27.663 ' 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.663 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.664 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.664 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.923 Cannot find device "nvmf_init_br" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.923 Cannot find device "nvmf_init_br2" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.923 Cannot find device "nvmf_tgt_br" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.923 Cannot find device "nvmf_tgt_br2" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.923 Cannot find device "nvmf_init_br" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.923 Cannot find device "nvmf_init_br2" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.923 Cannot find device "nvmf_tgt_br" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.923 Cannot find device "nvmf_tgt_br2" 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:27.923 16:03:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.923 Cannot find device "nvmf_br" 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.923 Cannot find 
device "nvmf_init_if" 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.923 Cannot find device "nvmf_init_if2" 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.923 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.923 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.923 16:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:27.924 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:28.182 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:28.182 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:28.182 00:15:28.182 --- 10.0.0.3 ping statistics --- 00:15:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.182 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:28.182 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:28.182 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:15:28.182 00:15:28.182 --- 10.0.0.4 ping statistics --- 00:15:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.182 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:28.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:28.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:28.182 00:15:28.182 --- 10.0.0.1 ping statistics --- 00:15:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.182 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:28.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:15:28.182 00:15:28.182 --- 10.0.0.2 ping statistics --- 00:15:28.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.182 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:28.182 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=73624 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 73624 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 73624 ']' 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.183 16:03:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:28.183 [2024-11-20 16:03:26.309038] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:28.183 [2024-11-20 16:03:26.309350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.441 [2024-11-20 16:03:26.457188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.441 [2024-11-20 16:03:26.519618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.441 [2024-11-20 16:03:26.519903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.441 [2024-11-20 16:03:26.520110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.441 [2024-11-20 16:03:26.520301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.441 [2024-11-20 16:03:26.520413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.441 [2024-11-20 16:03:26.520949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.441 [2024-11-20 16:03:26.575453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 [2024-11-20 16:03:27.392497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 Malloc0 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:29.376 [2024-11-20 16:03:27.432303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73656 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73657 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73658 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:29.376 16:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73656 00:15:29.376 [2024-11-20 16:03:27.611173] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:29.376 [2024-11-20 16:03:27.611721] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:29.376 [2024-11-20 16:03:27.612215] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:30.750 Initializing NVMe Controllers 00:15:30.750 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:30.750 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:30.750 Initialization complete. Launching workers. 00:15:30.750 ======================================================== 00:15:30.750 Latency(us) 00:15:30.750 Device Information : IOPS MiB/s Average min max 00:15:30.750 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3246.97 12.68 307.66 235.17 546.22 00:15:30.750 ======================================================== 00:15:30.750 Total : 3246.97 12.68 307.66 235.17 546.22 00:15:30.750 00:15:30.750 Initializing NVMe Controllers 00:15:30.750 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:30.750 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:30.750 Initialization complete. Launching workers. 00:15:30.750 ======================================================== 00:15:30.750 Latency(us) 00:15:30.750 Device Information : IOPS MiB/s Average min max 00:15:30.750 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3246.00 12.68 307.72 220.22 543.76 00:15:30.750 ======================================================== 00:15:30.750 Total : 3246.00 12.68 307.72 220.22 543.76 00:15:30.750 00:15:30.750 Initializing NVMe Controllers 00:15:30.750 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:30.750 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:30.750 Initialization complete. Launching workers. 
00:15:30.750 ======================================================== 00:15:30.750 Latency(us) 00:15:30.750 Device Information : IOPS MiB/s Average min max 00:15:30.750 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3236.00 12.64 308.62 224.44 657.10 00:15:30.750 ======================================================== 00:15:30.750 Total : 3236.00 12.64 308.62 224.44 657.10 00:15:30.750 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73657 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73658 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.750 rmmod nvme_tcp 00:15:30.750 rmmod nvme_fabrics 00:15:30.750 rmmod nvme_keyring 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 73624 ']' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 73624 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 73624 ']' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 73624 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73624 00:15:30.750 killing process with pid 73624 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73624' 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 73624 00:15:30.750 16:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 73624 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:31.009 00:15:31.009 real 0m3.536s 00:15:31.009 user 0m5.682s 00:15:31.009 
sys 0m1.321s 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.009 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:31.009 ************************************ 00:15:31.009 END TEST nvmf_control_msg_list 00:15:31.009 ************************************ 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.267 ************************************ 00:15:31.267 START TEST nvmf_wait_for_buf 00:15:31.267 ************************************ 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:31.267 * Looking for test storage... 00:15:31.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:31.267 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:31.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.664 --rc genhtml_branch_coverage=1 00:15:31.664 --rc genhtml_function_coverage=1 00:15:31.664 --rc genhtml_legend=1 00:15:31.664 --rc geninfo_all_blocks=1 00:15:31.664 --rc geninfo_unexecuted_blocks=1 00:15:31.664 00:15:31.664 ' 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:31.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.664 --rc genhtml_branch_coverage=1 00:15:31.664 --rc genhtml_function_coverage=1 00:15:31.664 --rc genhtml_legend=1 00:15:31.664 --rc geninfo_all_blocks=1 00:15:31.664 --rc geninfo_unexecuted_blocks=1 00:15:31.664 00:15:31.664 ' 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:31.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.664 --rc genhtml_branch_coverage=1 00:15:31.664 --rc genhtml_function_coverage=1 00:15:31.664 --rc genhtml_legend=1 00:15:31.664 --rc geninfo_all_blocks=1 00:15:31.664 --rc geninfo_unexecuted_blocks=1 00:15:31.664 00:15:31.664 ' 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:31.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.664 --rc genhtml_branch_coverage=1 00:15:31.664 --rc genhtml_function_coverage=1 00:15:31.664 --rc genhtml_legend=1 00:15:31.664 --rc geninfo_all_blocks=1 00:15:31.664 --rc geninfo_unexecuted_blocks=1 00:15:31.664 00:15:31.664 ' 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.664 16:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.664 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
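For orientation, the nvmftestinit/nvmf_veth_init sequence traced below boils down to a small fixed topology: two veth pairs for the initiator side, two for the target side, the target ends moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends so 10.0.0.1/10.0.0.2 (initiators) can reach 10.0.0.3/10.0.0.4 (targets). A condensed sketch using the names and addresses from this run; the preliminary cleanup attempts and error handling are omitted:

ip netns add nvmf_tgt_ns_spdk
# one veth pair per initiator interface, one per target interface
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# the target ends live inside the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiators on the host, targets in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and enslave the bridge-side peers to nvmf_br
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done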
00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:31.665 Cannot find device "nvmf_init_br" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:31.665 Cannot find device "nvmf_init_br2" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:31.665 Cannot find device "nvmf_tgt_br" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.665 Cannot find device "nvmf_tgt_br2" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:31.665 Cannot find device "nvmf_init_br" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:31.665 Cannot find device "nvmf_init_br2" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:31.665 Cannot find device "nvmf_tgt_br" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:31.665 Cannot find device "nvmf_tgt_br2" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:31.665 Cannot find device "nvmf_br" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:31.665 Cannot find device "nvmf_init_if" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:31.665 Cannot find device "nvmf_init_if2" 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.665 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.665 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.665 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:31.666 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:31.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:31.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:31.925 00:15:31.925 --- 10.0.0.3 ping statistics --- 00:15:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.925 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:31.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:31.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:31.925 00:15:31.925 --- 10.0.0.4 ping statistics --- 00:15:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.925 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:31.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:31.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:31.925 00:15:31.925 --- 10.0.0.1 ping statistics --- 00:15:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.925 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:31.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:31.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:15:31.925 00:15:31.925 --- 10.0.0.2 ping statistics --- 00:15:31.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:31.925 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:31.925 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73896 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73896 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 73896 ']' 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.926 16:03:29 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:31.926 [2024-11-20 16:03:29.999345] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
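The RPC sequence traced below is the core of the wait_for_buf case: the iobuf small pool is capped at 154 x 8 KiB buffers and the TCP transport gets only 24 shared buffers, so the 128 KiB random-read load from spdk_nvme_perf has to retry small-pool allocations; the test then passes only if iobuf_get_stats reports a nonzero retry count for nvmf_TCP (4788 in this run). A condensed sketch of that sequence, with rpc_cmd standing in for the autotest wrapper around the target's JSON-RPC socket:

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # deliberately small pool
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512                            # 32 MiB malloc bdev, 512 B blocks
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24             # only 24 shared buffers
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

retry_count=$(rpc_cmd iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[[ $retry_count -eq 0 ]] && exit 1    # buffers were never exhausted -> test failure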
00:15:31.926 [2024-11-20 16:03:29.999454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.926 [2024-11-20 16:03:30.153426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.184 [2024-11-20 16:03:30.223608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.184 [2024-11-20 16:03:30.224032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.184 [2024-11-20 16:03:30.224069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.184 [2024-11-20 16:03:30.224087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.184 [2024-11-20 16:03:30.224104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.184 [2024-11-20 16:03:30.224602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.184 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.184 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:32.184 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.184 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 [2024-11-20 16:03:30.350867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 Malloc0 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.185 [2024-11-20 16:03:30.423407] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.185 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:32.443 [2024-11-20 16:03:30.447471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.443 16:03:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:32.443 [2024-11-20 16:03:30.658993] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:33.819 Initializing NVMe Controllers 00:15:33.819 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:33.819 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:33.819 Initialization complete. Launching workers. 00:15:33.819 ======================================================== 00:15:33.819 Latency(us) 00:15:33.819 Device Information : IOPS MiB/s Average min max 00:15:33.819 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 504.00 63.00 7968.72 7044.77 8227.21 00:15:33.819 ======================================================== 00:15:33.819 Total : 504.00 63.00 7968.72 7044.77 8227.21 00:15:33.819 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4788 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4788 -eq 0 ]] 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:33.819 16:03:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:33.819 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:33.819 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:33.819 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:33.819 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.819 rmmod nvme_tcp 00:15:33.819 rmmod nvme_fabrics 00:15:34.077 rmmod nvme_keyring 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73896 ']' 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73896 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 73896 ']' 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # 
kill -0 73896 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73896 00:15:34.077 killing process with pid 73896 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73896' 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 73896 00:15:34.077 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 73896 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.335 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:34.593 00:15:34.593 real 0m3.297s 00:15:34.593 user 0m2.641s 00:15:34.593 sys 0m0.808s 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:34.593 ************************************ 00:15:34.593 END TEST nvmf_wait_for_buf 00:15:34.593 ************************************ 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.593 ************************************ 00:15:34.593 START TEST nvmf_nsid 00:15:34.593 ************************************ 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:15:34.593 * Looking for test storage... 
00:15:34.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:34.593 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.594 --rc genhtml_branch_coverage=1 00:15:34.594 --rc genhtml_function_coverage=1 00:15:34.594 --rc genhtml_legend=1 00:15:34.594 --rc geninfo_all_blocks=1 00:15:34.594 --rc geninfo_unexecuted_blocks=1 00:15:34.594 00:15:34.594 ' 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.594 --rc genhtml_branch_coverage=1 00:15:34.594 --rc genhtml_function_coverage=1 00:15:34.594 --rc genhtml_legend=1 00:15:34.594 --rc geninfo_all_blocks=1 00:15:34.594 --rc geninfo_unexecuted_blocks=1 00:15:34.594 00:15:34.594 ' 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.594 --rc genhtml_branch_coverage=1 00:15:34.594 --rc genhtml_function_coverage=1 00:15:34.594 --rc genhtml_legend=1 00:15:34.594 --rc geninfo_all_blocks=1 00:15:34.594 --rc geninfo_unexecuted_blocks=1 00:15:34.594 00:15:34.594 ' 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:34.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.594 --rc genhtml_branch_coverage=1 00:15:34.594 --rc genhtml_function_coverage=1 00:15:34.594 --rc genhtml_legend=1 00:15:34.594 --rc geninfo_all_blocks=1 00:15:34.594 --rc geninfo_unexecuted_blocks=1 00:15:34.594 00:15:34.594 ' 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
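The scripts/common.sh trace above ("lt 1.15 2", decimal, ver1[v] vs ver2[v]) is just a field-wise version comparison: lcov 1.15 sorts before 2, so autotest_common.sh keeps the pre-2.x "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" spelling of the coverage options. A simplified sketch of the '<' case only (the real cmp_versions also handles the other operators and validates every field through decimal):

# Split versions on '.', '-' or ':' and compare numerically, field by field.
# Missing fields count as 0; purely numeric fields are assumed.
lt() {
    local -a ver1 ver2
    local v len
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            return 0
        elif (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            return 1
        fi
    done
    return 1    # equal is not "less than"
}

# usage matching the trace: pick lcov options based on the installed lcov version
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi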
00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.594 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.900 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.901 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:34.901 Cannot find device "nvmf_init_br" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:34.901 Cannot find device "nvmf_init_br2" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:34.901 Cannot find device "nvmf_tgt_br" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.901 Cannot find device "nvmf_tgt_br2" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.901 Cannot find device "nvmf_init_br" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.901 Cannot find device "nvmf_init_br2" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.901 Cannot find device "nvmf_tgt_br" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.901 Cannot find device "nvmf_tgt_br2" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.901 Cannot find device "nvmf_br" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.901 Cannot find device "nvmf_init_if" 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:15:34.901 16:03:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.901 Cannot find device "nvmf_init_if2" 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:34.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.901 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.902 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.902 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.902 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.902 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.902 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
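The ipts/iptr pair seen below (and in the nvmf_wait_for_buf teardown earlier) is a small tagging scheme: every rule the test inserts carries an 'SPDK_NVMF:' comment, so cleanup can strip exactly those rules by filtering the saved ruleset. Roughly, with the definitions reconstructed from the expanded commands in the trace:

ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

# open the NVMe/TCP listener port on both initiator interfaces,
# and let traffic hairpin through the test bridge
ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# teardown later restores everything except the tagged rules
iptr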
00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:35.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:35.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:15:35.159 00:15:35.159 --- 10.0.0.3 ping statistics --- 00:15:35.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.159 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:35.159 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:35.159 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:35.159 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:15:35.159 00:15:35.160 --- 10.0.0.4 ping statistics --- 00:15:35.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.160 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:35.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:35.160 00:15:35.160 --- 10.0.0.1 ping statistics --- 00:15:35.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.160 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:35.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:15:35.160 00:15:35.160 --- 10.0.0.2 ping statistics --- 00:15:35.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.160 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:35.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=74154 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 74154 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74154 ']' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.160 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:35.160 [2024-11-20 16:03:33.374517] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
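[editor's note] The records above cover the remaining bring-up: port 4420 ACCEPT rules are inserted through the ipts wrapper (plain iptables plus an "-m comment SPDK_NVMF" tag, as the @790 expansions show), the four 10.0.0.x endpoints are ping-tested in both directions, and nvmf_tgt is then launched inside the namespace and waited on. A hedged sketch of that last step; the binary path, flags and socket path are copied from the trace, but the polling loop is only an approximation of the real waitforlisten helper in autotest_common.sh:

    # start the NVMe-oF target inside the test namespace (flags as logged above)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    nvmfpid=$!

    # poll the default RPC UNIX socket until the app answers (approximation,
    # not the actual waitforlisten implementation)
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
               rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done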
00:15:35.160 [2024-11-20 16:03:33.374930] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.417 [2024-11-20 16:03:33.524314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.417 [2024-11-20 16:03:33.588372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.417 [2024-11-20 16:03:33.588695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.417 [2024-11-20 16:03:33.588913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.417 [2024-11-20 16:03:33.588983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.417 [2024-11-20 16:03:33.589099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.417 [2024-11-20 16:03:33.589576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.417 [2024-11-20 16:03:33.645552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=74183 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:15:35.675 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0843504a-c05f-4708-a65f-b33a39f49e8c 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=30ee9277-e70d-42f9-8b25-b87ad43dc423 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=39b946cd-2073-437b-9af6-72aec1e4aaab 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:35.676 null0 00:15:35.676 null1 00:15:35.676 null2 00:15:35.676 [2024-11-20 16:03:33.807697] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.676 [2024-11-20 16:03:33.831945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.676 [2024-11-20 16:03:33.836834] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:35.676 [2024-11-20 16:03:33.836947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74183 ] 00:15:35.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 74183 /var/tmp/tgt2.sock 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 74183 ']' 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.676 16:03:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:35.934 [2024-11-20 16:03:33.994218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.934 [2024-11-20 16:03:34.066853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.934 [2024-11-20 16:03:34.149658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.191 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.191 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:36.191 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:36.759 [2024-11-20 16:03:34.764698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.759 [2024-11-20 16:03:34.780857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:15:36.759 nvme0n1 nvme0n2 00:15:36.759 nvme1n1 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:15:36.759 16:03:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:15:38.132 16:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:38.132 16:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:38.132 16:03:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:38.132 16:03:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0843504a-c05f-4708-a65f-b33a39f49e8c 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0843504ac05f4708a65fb33a39f49e8c 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0843504AC05F4708A65FB33A39F49E8C 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0843504AC05F4708A65FB33A39F49E8C == \0\8\4\3\5\0\4\A\C\0\5\F\4\7\0\8\A\6\5\F\B\3\3\A\3\9\F\4\9\E\8\C ]] 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 30ee9277-e70d-42f9-8b25-b87ad43dc423 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=30ee9277e70d42f98b25b87ad43dc423 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 30EE9277E70D42F98B25B87AD43DC423 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 30EE9277E70D42F98B25B87AD43DC423 == \3\0\E\E\9\2\7\7\E\7\0\D\4\2\F\9\8\B\2\5\B\8\7\A\D\4\3\D\C\4\2\3 ]] 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:38.132 16:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 39b946cd-2073-437b-9af6-72aec1e4aaab 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=39b946cd2073437b9af672aec1e4aaab 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 39B946CD2073437B9AF672AEC1E4AAAB 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 39B946CD2073437B9AF672AEC1E4AAAB == \3\9\B\9\4\6\C\D\2\0\7\3\4\3\7\B\9\A\F\6\7\2\A\E\C\1\E\4\A\A\A\B ]] 00:15:38.132 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 74183 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74183 ']' 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74183 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74183 00:15:38.391 killing process with pid 74183 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74183' 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74183 00:15:38.391 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74183 00:15:38.650 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:15:38.650 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.650 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.910 rmmod nvme_tcp 00:15:38.910 rmmod nvme_fabrics 00:15:38.910 rmmod nvme_keyring 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 74154 ']' 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 74154 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 74154 ']' 00:15:38.910 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 74154 00:15:38.911 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:15:38.911 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.911 16:03:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74154 00:15:38.911 killing process with pid 74154 00:15:38.911 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.911 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.911 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74154' 00:15:38.911 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 74154 00:15:38.911 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 74154 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:39.171 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:15:39.429 00:15:39.429 real 0m4.857s 00:15:39.429 user 0m7.088s 00:15:39.429 sys 0m1.709s 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.429 ************************************ 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:39.429 END TEST nvmf_nsid 00:15:39.429 ************************************ 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:39.429 ************************************ 00:15:39.429 END TEST nvmf_target_extra 00:15:39.429 00:15:39.429 real 5m17.071s 00:15:39.429 user 11m7.388s 00:15:39.429 sys 1m9.361s 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.429 16:03:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.429 ************************************ 00:15:39.429 16:03:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:39.429 16:03:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.429 16:03:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.429 16:03:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:39.429 ************************************ 00:15:39.429 START TEST nvmf_host 00:15:39.429 ************************************ 00:15:39.429 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:15:39.429 * Looking for test storage... 
00:15:39.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:39.429 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.688 --rc genhtml_branch_coverage=1 00:15:39.688 --rc genhtml_function_coverage=1 00:15:39.688 --rc genhtml_legend=1 00:15:39.688 --rc geninfo_all_blocks=1 00:15:39.688 --rc geninfo_unexecuted_blocks=1 00:15:39.688 00:15:39.688 ' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.688 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:39.688 --rc genhtml_branch_coverage=1 00:15:39.688 --rc genhtml_function_coverage=1 00:15:39.688 --rc genhtml_legend=1 00:15:39.688 --rc geninfo_all_blocks=1 00:15:39.688 --rc geninfo_unexecuted_blocks=1 00:15:39.688 00:15:39.688 ' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.688 --rc genhtml_branch_coverage=1 00:15:39.688 --rc genhtml_function_coverage=1 00:15:39.688 --rc genhtml_legend=1 00:15:39.688 --rc geninfo_all_blocks=1 00:15:39.688 --rc geninfo_unexecuted_blocks=1 00:15:39.688 00:15:39.688 ' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.688 --rc genhtml_branch_coverage=1 00:15:39.688 --rc genhtml_function_coverage=1 00:15:39.688 --rc genhtml_legend=1 00:15:39.688 --rc geninfo_all_blocks=1 00:15:39.688 --rc geninfo_unexecuted_blocks=1 00:15:39.688 00:15:39.688 ' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.688 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.689 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:39.689 
16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:39.689 ************************************ 00:15:39.689 START TEST nvmf_identify 00:15:39.689 ************************************ 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:39.689 * Looking for test storage... 00:15:39.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.689 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.948 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:15:39.949 16:03:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.949 --rc genhtml_branch_coverage=1 00:15:39.949 --rc genhtml_function_coverage=1 00:15:39.949 --rc genhtml_legend=1 00:15:39.949 --rc geninfo_all_blocks=1 00:15:39.949 --rc geninfo_unexecuted_blocks=1 00:15:39.949 00:15:39.949 ' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 
16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:39.949 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.949 16:03:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:39.949 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:39.950 Cannot find device "nvmf_init_br" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:39.950 Cannot find device "nvmf_init_br2" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:39.950 Cannot find device "nvmf_tgt_br" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:15:39.950 Cannot find device "nvmf_tgt_br2" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:39.950 Cannot find device "nvmf_init_br" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:39.950 Cannot find device "nvmf_init_br2" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:39.950 Cannot find device "nvmf_tgt_br" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:39.950 Cannot find device "nvmf_tgt_br2" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:39.950 Cannot find device "nvmf_br" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:39.950 Cannot find device "nvmf_init_if" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:39.950 Cannot find device "nvmf_init_if2" 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:39.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:39.950 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:39.950 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:40.209 
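The nvmf_veth_init steps traced here and in the following lines build the small virtual topology the TCP tests run on: a target-side network namespace (nvmf_tgt_ns_spdk), two initiator-side and two target-side veth pairs addressed 10.0.0.1-10.0.0.4/24, a bridge (nvmf_br) joining the host-side peers, and iptables rules admitting NVMe/TCP traffic on port 4420. A minimal stand-alone sketch of that layout, using only the ip/iptables commands visible in this log (same interface and address names as the test, run as root), would look like:

# create the target namespace and the four veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# move the target-side endpoints into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# address the initiator (host) and target (namespace) endpoints
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
# allow NVMe/TCP (port 4420) in, plus bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity check: host reaches the namespaced target IPs and vice versa
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the init path first tears down any leftover interfaces and namespace from a previous run before recreating them.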
16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:40.209 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:40.210 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:40.210 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:15:40.210 00:15:40.210 --- 10.0.0.3 ping statistics --- 00:15:40.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.210 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:40.210 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:40.210 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:40.210 00:15:40.210 --- 10.0.0.4 ping statistics --- 00:15:40.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.210 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:40.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:40.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:40.210 00:15:40.210 --- 10.0.0.1 ping statistics --- 00:15:40.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.210 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:40.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:40.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:40.210 00:15:40.210 --- 10.0.0.2 ping statistics --- 00:15:40.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:40.210 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:40.210 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74535 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 74535 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 74535 ']' 00:15:40.468 
16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.468 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.468 [2024-11-20 16:03:38.535295] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:40.468 [2024-11-20 16:03:38.535391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.468 [2024-11-20 16:03:38.686745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:40.727 [2024-11-20 16:03:38.749279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.727 [2024-11-20 16:03:38.749343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.727 [2024-11-20 16:03:38.749372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.727 [2024-11-20 16:03:38.749381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.727 [2024-11-20 16:03:38.749388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
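With the topology in place, identify.sh launches the target application inside the namespace and waits for its JSON-RPC socket before configuring it; the trap recorded above (process_shm / nvmftestfini) guarantees cleanup on exit. A hedged sketch of that step, based on the command line captured in this log (the real test uses the waitforlisten helper from autotest_common.sh; the polling loop below is only an illustrative stand-in for it):

# run nvmf_tgt inside the target namespace: -i 0 selects shm id 0,
# -e 0xFFFF enables all tracepoint groups, -m 0xF pins the app to cores 0-3
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# wait until the app is up and listening on its RPC socket
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done

The notices that follow (tracepoint group mask 0xFFFF, reactors started on cores 0-3, default socket implementation override: uring) confirm those options took effect.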
00:15:40.727 [2024-11-20 16:03:38.750888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.727 [2024-11-20 16:03:38.750932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.727 [2024-11-20 16:03:38.751017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:40.727 [2024-11-20 16:03:38.751019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.727 [2024-11-20 16:03:38.808867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.727 [2024-11-20 16:03:38.890977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.727 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.985 Malloc0 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.985 16:03:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.986 [2024-11-20 16:03:38.999968] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:40.986 [ 00:15:40.986 { 00:15:40.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:40.986 "subtype": "Discovery", 00:15:40.986 "listen_addresses": [ 00:15:40.986 { 00:15:40.986 "trtype": "TCP", 00:15:40.986 "adrfam": "IPv4", 00:15:40.986 "traddr": "10.0.0.3", 00:15:40.986 "trsvcid": "4420" 00:15:40.986 } 00:15:40.986 ], 00:15:40.986 "allow_any_host": true, 00:15:40.986 "hosts": [] 00:15:40.986 }, 00:15:40.986 { 00:15:40.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:40.986 "subtype": "NVMe", 00:15:40.986 "listen_addresses": [ 00:15:40.986 { 00:15:40.986 "trtype": "TCP", 00:15:40.986 "adrfam": "IPv4", 00:15:40.986 "traddr": "10.0.0.3", 00:15:40.986 "trsvcid": "4420" 00:15:40.986 } 00:15:40.986 ], 00:15:40.986 "allow_any_host": true, 00:15:40.986 "hosts": [], 00:15:40.986 "serial_number": "SPDK00000000000001", 00:15:40.986 "model_number": "SPDK bdev Controller", 00:15:40.986 "max_namespaces": 32, 00:15:40.986 "min_cntlid": 1, 00:15:40.986 "max_cntlid": 65519, 00:15:40.986 "namespaces": [ 00:15:40.986 { 00:15:40.986 "nsid": 1, 00:15:40.986 "bdev_name": "Malloc0", 00:15:40.986 "name": "Malloc0", 00:15:40.986 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:40.986 "eui64": "ABCDEF0123456789", 00:15:40.986 "uuid": "584f0bf8-ddc8-4e86-a834-7aa6b62a0c2f" 00:15:40.986 } 00:15:40.986 ] 00:15:40.986 } 00:15:40.986 ] 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.986 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:40.986 [2024-11-20 16:03:39.062786] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
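The rpc_cmd calls above (a test-harness wrapper that forwards to SPDK's scripts/rpc.py) build the configuration the identify run will probe: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, and listeners on 10.0.0.3:4420 for both that subsystem and the discovery service. Replayed directly with rpc.py against the target's default socket, the same sequence would look roughly like this (the rpc.py path is assumed from the repo layout shown earlier; all flags are taken verbatim from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems     # prints the JSON dump shown above

The nvmf_get_subsystems output in the log confirms both the discovery subsystem and cnode1 with the Malloc0 namespace before spdk_nvme_identify is started.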
00:15:40.986 [2024-11-20 16:03:39.062892] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74564 ] 00:15:41.245 [2024-11-20 16:03:39.237020] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:15:41.245 [2024-11-20 16:03:39.237095] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:41.245 [2024-11-20 16:03:39.237103] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:41.245 [2024-11-20 16:03:39.237122] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:41.245 [2024-11-20 16:03:39.237133] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:41.245 [2024-11-20 16:03:39.237514] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:15:41.245 [2024-11-20 16:03:39.237591] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1760750 0 00:15:41.245 [2024-11-20 16:03:39.244839] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:41.245 [2024-11-20 16:03:39.244869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:41.245 [2024-11-20 16:03:39.244876] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:41.245 [2024-11-20 16:03:39.244879] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:41.245 [2024-11-20 16:03:39.244912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.244920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.244924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.244951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:41.245 [2024-11-20 16:03:39.244983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.252830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.252855] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.252861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.252867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.252882] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:41.245 [2024-11-20 16:03:39.252892] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:15:41.245 [2024-11-20 16:03:39.252898] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:15:41.245 [2024-11-20 16:03:39.252917] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.252923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:15:41.245 [2024-11-20 16:03:39.252927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.252938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.252967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253062] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:15:41.245 [2024-11-20 16:03:39.253070] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:15:41.245 [2024-11-20 16:03:39.253079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.253096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.253117] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:15:41.245 [2024-11-20 16:03:39.253211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.253237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.253257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253313] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.253356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.253374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253452] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:41.245 [2024-11-20 16:03:39.253457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253578] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:15:41.245 [2024-11-20 16:03:39.253584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.253612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.253633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253688] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:15:41.245 [2024-11-20 16:03:39.253693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.245 [2024-11-20 16:03:39.253709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253714] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.245 [2024-11-20 16:03:39.253726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.245 [2024-11-20 16:03:39.253745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.245 [2024-11-20 16:03:39.253790] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.245 [2024-11-20 16:03:39.253797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.245 [2024-11-20 16:03:39.253800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.245 [2024-11-20 16:03:39.253805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.245 [2024-11-20 16:03:39.253824] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.246 [2024-11-20 16:03:39.253832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.253842] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:15:41.246 [2024-11-20 16:03:39.253858] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.253870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.253875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.253884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.253906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.246 [2024-11-20 16:03:39.254001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.246 [2024-11-20 16:03:39.254008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.246 [2024-11-20 16:03:39.254013] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254017] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1760750): datao=0, datal=4096, cccid=0 00:15:41.246 [2024-11-20 16:03:39.254022] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17c4740) on tqpair(0x1760750): expected_datao=0, payload_size=4096 00:15:41.246 [2024-11-20 16:03:39.254027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254036] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.254056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.254060] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.254074] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:15:41.246 [2024-11-20 16:03:39.254079] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:15:41.246 [2024-11-20 16:03:39.254084] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:15:41.246 [2024-11-20 16:03:39.254090] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:15:41.246 [2024-11-20 16:03:39.254095] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:15:41.246 [2024-11-20 16:03:39.254100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.254114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.254123] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254128] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:41.246 [2024-11-20 16:03:39.254162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.246 [2024-11-20 16:03:39.254216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.254223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.254227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.254240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.246 
[2024-11-20 16:03:39.254262] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.246 [2024-11-20 16:03:39.254293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.246 [2024-11-20 16:03:39.254313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.246 [2024-11-20 16:03:39.254333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.254347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.246 [2024-11-20 16:03:39.254355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.254388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4740, cid 0, qid 0 00:15:41.246 [2024-11-20 16:03:39.254396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c48c0, cid 1, qid 0 00:15:41.246 [2024-11-20 16:03:39.254401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4a40, cid 2, qid 0 00:15:41.246 [2024-11-20 16:03:39.254406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.246 [2024-11-20 16:03:39.254411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4d40, cid 4, qid 0 00:15:41.246 [2024-11-20 16:03:39.254504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.254511] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.254515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4d40) on tqpair=0x1760750 00:15:41.246 [2024-11-20 
16:03:39.254525] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:15:41.246 [2024-11-20 16:03:39.254531] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:15:41.246 [2024-11-20 16:03:39.254543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.254576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4d40, cid 4, qid 0 00:15:41.246 [2024-11-20 16:03:39.254636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.246 [2024-11-20 16:03:39.254643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.246 [2024-11-20 16:03:39.254647] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1760750): datao=0, datal=4096, cccid=4 00:15:41.246 [2024-11-20 16:03:39.254656] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17c4d40) on tqpair(0x1760750): expected_datao=0, payload_size=4096 00:15:41.246 [2024-11-20 16:03:39.254661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254669] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254673] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.254688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.254692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4d40) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.254709] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:15:41.246 [2024-11-20 16:03:39.254744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.254767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.254781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.246 [2024-11-20 16:03:39.254807] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4d40, cid 4, qid 0 00:15:41.246 [2024-11-20 16:03:39.254831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4ec0, cid 5, qid 0 00:15:41.246 [2024-11-20 16:03:39.254941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.246 [2024-11-20 16:03:39.254949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.246 [2024-11-20 16:03:39.254953] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254957] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1760750): datao=0, datal=1024, cccid=4 00:15:41.246 [2024-11-20 16:03:39.254961] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17c4d40) on tqpair(0x1760750): expected_datao=0, payload_size=1024 00:15:41.246 [2024-11-20 16:03:39.254966] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254974] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.254990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.254994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.254998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4ec0) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.255017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.255025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.255029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4d40) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.255047] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255052] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.255060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.255085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4d40, cid 4, qid 0 00:15:41.246 [2024-11-20 16:03:39.255171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.246 [2024-11-20 16:03:39.255179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.246 [2024-11-20 16:03:39.255183] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1760750): datao=0, datal=3072, cccid=4 00:15:41.246 [2024-11-20 16:03:39.255192] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17c4d40) on tqpair(0x1760750): expected_datao=0, payload_size=3072 00:15:41.246 [2024-11-20 16:03:39.255197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:15:41.246 [2024-11-20 16:03:39.255208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.246 [2024-11-20 16:03:39.255223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.246 [2024-11-20 16:03:39.255227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4d40) on tqpair=0x1760750 00:15:41.246 [2024-11-20 16:03:39.255242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1760750) 00:15:41.246 [2024-11-20 16:03:39.255255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.246 [2024-11-20 16:03:39.255280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4d40, cid 4, qid 0 00:15:41.246 [2024-11-20 16:03:39.255348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.246 [2024-11-20 16:03:39.255355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.246 [2024-11-20 16:03:39.255359] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.246 [2024-11-20 16:03:39.255363] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1760750): datao=0, datal=8, cccid=4 00:15:41.247 [2024-11-20 16:03:39.255368] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17c4d40) on tqpair(0x1760750): expected_datao=0, payload_size=8 00:15:41.247 [2024-11-20 16:03:39.255372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255380] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255384] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.247 [2024-11-20 16:03:39.255406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.247 [2024-11-20 16:03:39.255410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4d40) on tqpair=0x1760750 00:15:41.247 ===================================================== 00:15:41.247 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:41.247 ===================================================== 00:15:41.247 Controller Capabilities/Features 00:15:41.247 ================================ 00:15:41.247 Vendor ID: 0000 00:15:41.247 Subsystem Vendor ID: 0000 00:15:41.247 Serial Number: .................... 00:15:41.247 Model Number: ........................................ 
00:15:41.247 Firmware Version: 25.01 00:15:41.247 Recommended Arb Burst: 0 00:15:41.247 IEEE OUI Identifier: 00 00 00 00:15:41.247 Multi-path I/O 00:15:41.247 May have multiple subsystem ports: No 00:15:41.247 May have multiple controllers: No 00:15:41.247 Associated with SR-IOV VF: No 00:15:41.247 Max Data Transfer Size: 131072 00:15:41.247 Max Number of Namespaces: 0 00:15:41.247 Max Number of I/O Queues: 1024 00:15:41.247 NVMe Specification Version (VS): 1.3 00:15:41.247 NVMe Specification Version (Identify): 1.3 00:15:41.247 Maximum Queue Entries: 128 00:15:41.247 Contiguous Queues Required: Yes 00:15:41.247 Arbitration Mechanisms Supported 00:15:41.247 Weighted Round Robin: Not Supported 00:15:41.247 Vendor Specific: Not Supported 00:15:41.247 Reset Timeout: 15000 ms 00:15:41.247 Doorbell Stride: 4 bytes 00:15:41.247 NVM Subsystem Reset: Not Supported 00:15:41.247 Command Sets Supported 00:15:41.247 NVM Command Set: Supported 00:15:41.247 Boot Partition: Not Supported 00:15:41.247 Memory Page Size Minimum: 4096 bytes 00:15:41.247 Memory Page Size Maximum: 4096 bytes 00:15:41.247 Persistent Memory Region: Not Supported 00:15:41.247 Optional Asynchronous Events Supported 00:15:41.247 Namespace Attribute Notices: Not Supported 00:15:41.247 Firmware Activation Notices: Not Supported 00:15:41.247 ANA Change Notices: Not Supported 00:15:41.247 PLE Aggregate Log Change Notices: Not Supported 00:15:41.247 LBA Status Info Alert Notices: Not Supported 00:15:41.247 EGE Aggregate Log Change Notices: Not Supported 00:15:41.247 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.247 Zone Descriptor Change Notices: Not Supported 00:15:41.247 Discovery Log Change Notices: Supported 00:15:41.247 Controller Attributes 00:15:41.247 128-bit Host Identifier: Not Supported 00:15:41.247 Non-Operational Permissive Mode: Not Supported 00:15:41.247 NVM Sets: Not Supported 00:15:41.247 Read Recovery Levels: Not Supported 00:15:41.247 Endurance Groups: Not Supported 00:15:41.247 Predictable Latency Mode: Not Supported 00:15:41.247 Traffic Based Keep ALive: Not Supported 00:15:41.247 Namespace Granularity: Not Supported 00:15:41.247 SQ Associations: Not Supported 00:15:41.247 UUID List: Not Supported 00:15:41.247 Multi-Domain Subsystem: Not Supported 00:15:41.247 Fixed Capacity Management: Not Supported 00:15:41.247 Variable Capacity Management: Not Supported 00:15:41.247 Delete Endurance Group: Not Supported 00:15:41.247 Delete NVM Set: Not Supported 00:15:41.247 Extended LBA Formats Supported: Not Supported 00:15:41.247 Flexible Data Placement Supported: Not Supported 00:15:41.247 00:15:41.247 Controller Memory Buffer Support 00:15:41.247 ================================ 00:15:41.247 Supported: No 00:15:41.247 00:15:41.247 Persistent Memory Region Support 00:15:41.247 ================================ 00:15:41.247 Supported: No 00:15:41.247 00:15:41.247 Admin Command Set Attributes 00:15:41.247 ============================ 00:15:41.247 Security Send/Receive: Not Supported 00:15:41.247 Format NVM: Not Supported 00:15:41.247 Firmware Activate/Download: Not Supported 00:15:41.247 Namespace Management: Not Supported 00:15:41.247 Device Self-Test: Not Supported 00:15:41.247 Directives: Not Supported 00:15:41.247 NVMe-MI: Not Supported 00:15:41.247 Virtualization Management: Not Supported 00:15:41.247 Doorbell Buffer Config: Not Supported 00:15:41.247 Get LBA Status Capability: Not Supported 00:15:41.247 Command & Feature Lockdown Capability: Not Supported 00:15:41.247 Abort Command Limit: 1 00:15:41.247 Async 
Event Request Limit: 4 00:15:41.247 Number of Firmware Slots: N/A 00:15:41.247 Firmware Slot 1 Read-Only: N/A 00:15:41.247 Firmware Activation Without Reset: N/A 00:15:41.247 Multiple Update Detection Support: N/A 00:15:41.247 Firmware Update Granularity: No Information Provided 00:15:41.247 Per-Namespace SMART Log: No 00:15:41.247 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.247 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:41.247 Command Effects Log Page: Not Supported 00:15:41.247 Get Log Page Extended Data: Supported 00:15:41.247 Telemetry Log Pages: Not Supported 00:15:41.247 Persistent Event Log Pages: Not Supported 00:15:41.247 Supported Log Pages Log Page: May Support 00:15:41.247 Commands Supported & Effects Log Page: Not Supported 00:15:41.247 Feature Identifiers & Effects Log Page:May Support 00:15:41.247 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.247 Data Area 4 for Telemetry Log: Not Supported 00:15:41.247 Error Log Page Entries Supported: 128 00:15:41.247 Keep Alive: Not Supported 00:15:41.247 00:15:41.247 NVM Command Set Attributes 00:15:41.247 ========================== 00:15:41.247 Submission Queue Entry Size 00:15:41.247 Max: 1 00:15:41.247 Min: 1 00:15:41.247 Completion Queue Entry Size 00:15:41.247 Max: 1 00:15:41.247 Min: 1 00:15:41.247 Number of Namespaces: 0 00:15:41.247 Compare Command: Not Supported 00:15:41.247 Write Uncorrectable Command: Not Supported 00:15:41.247 Dataset Management Command: Not Supported 00:15:41.247 Write Zeroes Command: Not Supported 00:15:41.247 Set Features Save Field: Not Supported 00:15:41.247 Reservations: Not Supported 00:15:41.247 Timestamp: Not Supported 00:15:41.247 Copy: Not Supported 00:15:41.247 Volatile Write Cache: Not Present 00:15:41.247 Atomic Write Unit (Normal): 1 00:15:41.247 Atomic Write Unit (PFail): 1 00:15:41.247 Atomic Compare & Write Unit: 1 00:15:41.247 Fused Compare & Write: Supported 00:15:41.247 Scatter-Gather List 00:15:41.247 SGL Command Set: Supported 00:15:41.247 SGL Keyed: Supported 00:15:41.247 SGL Bit Bucket Descriptor: Not Supported 00:15:41.247 SGL Metadata Pointer: Not Supported 00:15:41.247 Oversized SGL: Not Supported 00:15:41.247 SGL Metadata Address: Not Supported 00:15:41.247 SGL Offset: Supported 00:15:41.247 Transport SGL Data Block: Not Supported 00:15:41.247 Replay Protected Memory Block: Not Supported 00:15:41.247 00:15:41.247 Firmware Slot Information 00:15:41.247 ========================= 00:15:41.247 Active slot: 0 00:15:41.247 00:15:41.247 00:15:41.247 Error Log 00:15:41.247 ========= 00:15:41.247 00:15:41.247 Active Namespaces 00:15:41.247 ================= 00:15:41.247 Discovery Log Page 00:15:41.247 ================== 00:15:41.247 Generation Counter: 2 00:15:41.247 Number of Records: 2 00:15:41.247 Record Format: 0 00:15:41.247 00:15:41.247 Discovery Log Entry 0 00:15:41.247 ---------------------- 00:15:41.247 Transport Type: 3 (TCP) 00:15:41.247 Address Family: 1 (IPv4) 00:15:41.247 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:41.247 Entry Flags: 00:15:41.247 Duplicate Returned Information: 1 00:15:41.247 Explicit Persistent Connection Support for Discovery: 1 00:15:41.247 Transport Requirements: 00:15:41.247 Secure Channel: Not Required 00:15:41.247 Port ID: 0 (0x0000) 00:15:41.247 Controller ID: 65535 (0xffff) 00:15:41.247 Admin Max SQ Size: 128 00:15:41.247 Transport Service Identifier: 4420 00:15:41.247 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:41.247 Transport Address: 10.0.0.3 00:15:41.247 
Discovery Log Entry 1 00:15:41.247 ---------------------- 00:15:41.247 Transport Type: 3 (TCP) 00:15:41.247 Address Family: 1 (IPv4) 00:15:41.247 Subsystem Type: 2 (NVM Subsystem) 00:15:41.247 Entry Flags: 00:15:41.247 Duplicate Returned Information: 0 00:15:41.247 Explicit Persistent Connection Support for Discovery: 0 00:15:41.247 Transport Requirements: 00:15:41.247 Secure Channel: Not Required 00:15:41.247 Port ID: 0 (0x0000) 00:15:41.247 Controller ID: 65535 (0xffff) 00:15:41.247 Admin Max SQ Size: 128 00:15:41.247 Transport Service Identifier: 4420 00:15:41.247 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:41.247 Transport Address: 10.0.0.3 [2024-11-20 16:03:39.255533] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:15:41.247 [2024-11-20 16:03:39.255551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4740) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.247 [2024-11-20 16:03:39.255566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c48c0) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.247 [2024-11-20 16:03:39.255576] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4a40) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.247 [2024-11-20 16:03:39.255587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.247 [2024-11-20 16:03:39.255602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.247 [2024-11-20 16:03:39.255620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.247 [2024-11-20 16:03:39.255647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.247 [2024-11-20 16:03:39.255702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.247 [2024-11-20 16:03:39.255709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.247 [2024-11-20 16:03:39.255713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.247 [2024-11-20 
16:03:39.255743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.247 [2024-11-20 16:03:39.255766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.247 [2024-11-20 16:03:39.255853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.247 [2024-11-20 16:03:39.255863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.247 [2024-11-20 16:03:39.255867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.247 [2024-11-20 16:03:39.255871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.247 [2024-11-20 16:03:39.255877] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:15:41.248 [2024-11-20 16:03:39.255886] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:15:41.248 [2024-11-20 16:03:39.255897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.255903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.255906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.255915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.255935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.255979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.255986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.255990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.255994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256006] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256042] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256125] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256212] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256342] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256346] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256541] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.256744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.256751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.256755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256759] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.256770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.256779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.256786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.256804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 
[2024-11-20 16:03:39.260836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.260848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.260852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.260857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.260871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.260877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.260881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1760750) 00:15:41.248 [2024-11-20 16:03:39.260890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.248 [2024-11-20 16:03:39.260916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17c4bc0, cid 3, qid 0 00:15:41.248 [2024-11-20 16:03:39.260972] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.248 [2024-11-20 16:03:39.260980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.248 [2024-11-20 16:03:39.260984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.248 [2024-11-20 16:03:39.260988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17c4bc0) on tqpair=0x1760750 00:15:41.248 [2024-11-20 16:03:39.260997] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:15:41.248 00:15:41.248 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:41.248 [2024-11-20 16:03:39.308299] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:15:41.248 [2024-11-20 16:03:39.308564] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:15:41.248 [2024-11-20 16:03:39.482125] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:15:41.248 [2024-11-20 16:03:39.482213] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:41.248 [2024-11-20 16:03:39.482221] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:41.248 [2024-11-20 16:03:39.482242] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:41.248 [2024-11-20 16:03:39.482254] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:41.248 [2024-11-20 16:03:39.482621] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:15:41.248 [2024-11-20 16:03:39.482697] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9cb750 0 00:15:41.511 [2024-11-20 16:03:39.496835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:41.511 [2024-11-20 16:03:39.496865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:41.511 [2024-11-20 16:03:39.496872] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:41.511 [2024-11-20 16:03:39.496876] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:41.511 [2024-11-20 16:03:39.496909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.496917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.496922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.496938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:41.511 [2024-11-20 16:03:39.496971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.503860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.503888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.503894] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.503900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.503912] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:41.511 [2024-11-20 16:03:39.503922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:15:41.511 [2024-11-20 16:03:39.503929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:15:41.511 [2024-11-20 16:03:39.503947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.503953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.503957] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.503967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.503998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.504063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.504074] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:15:41.511 [2024-11-20 16:03:39.504083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:15:41.511 [2024-11-20 16:03:39.504092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504096] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504129] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.504192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.504203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:15:41.511 [2024-11-20 16:03:39.504212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504225] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 
16:03:39.504315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.504325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.504429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.504439] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:15:41.511 [2024-11-20 16:03:39.504444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504565] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:15:41.511 [2024-11-20 16:03:39.504580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.504702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 
[2024-11-20 16:03:39.504712] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:41.511 [2024-11-20 16:03:39.504723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.504801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.504808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.511 [2024-11-20 16:03:39.504826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.511 [2024-11-20 16:03:39.504837] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:41.511 [2024-11-20 16:03:39.504842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:15:41.511 [2024-11-20 16:03:39.504852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:15:41.511 [2024-11-20 16:03:39.504869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:15:41.511 [2024-11-20 16:03:39.504881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.504886] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.511 [2024-11-20 16:03:39.504895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.511 [2024-11-20 16:03:39.504917] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.511 [2024-11-20 16:03:39.505012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.511 [2024-11-20 16:03:39.505025] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.511 [2024-11-20 16:03:39.505030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.505035] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=4096, cccid=0 00:15:41.511 [2024-11-20 16:03:39.505040] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2f740) on tqpair(0x9cb750): expected_datao=0, payload_size=4096 00:15:41.511 [2024-11-20 16:03:39.505045] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.505055] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.505060] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:15:41.511 [2024-11-20 16:03:39.505069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.511 [2024-11-20 16:03:39.505076] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.505079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.512 [2024-11-20 16:03:39.505094] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:15:41.512 [2024-11-20 16:03:39.505100] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:15:41.512 [2024-11-20 16:03:39.505105] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:15:41.512 [2024-11-20 16:03:39.505110] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:15:41.512 [2024-11-20 16:03:39.505115] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:15:41.512 [2024-11-20 16:03:39.505121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505154] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:41.512 [2024-11-20 16:03:39.505195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.512 [2024-11-20 16:03:39.505246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.512 [2024-11-20 16:03:39.505253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.505257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505261] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.512 [2024-11-20 16:03:39.505270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.512 [2024-11-20 16:03:39.505293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=1 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.512 [2024-11-20 16:03:39.505315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.512 [2024-11-20 16:03:39.505336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.512 [2024-11-20 16:03:39.505356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505392] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.512 [2024-11-20 16:03:39.505414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f740, cid 0, qid 0 00:15:41.512 [2024-11-20 16:03:39.505422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2f8c0, cid 1, qid 0 00:15:41.512 [2024-11-20 16:03:39.505427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fa40, cid 2, qid 0 00:15:41.512 [2024-11-20 16:03:39.505432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.512 [2024-11-20 16:03:39.505438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.512 [2024-11-20 16:03:39.505527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.512 [2024-11-20 16:03:39.505544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.505549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505553] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.512 [2024-11-20 16:03:39.505559] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:15:41.512 [2024-11-20 16:03:39.505565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:41.512 [2024-11-20 16:03:39.505632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.512 [2024-11-20 16:03:39.505685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.512 [2024-11-20 16:03:39.505692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.505696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.512 [2024-11-20 16:03:39.505769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505781] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.505791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.505804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.512 [2024-11-20 16:03:39.505841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.512 [2024-11-20 16:03:39.505903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.512 [2024-11-20 16:03:39.505911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.512 [2024-11-20 16:03:39.505915] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505919] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=4096, cccid=4 00:15:41.512 [2024-11-20 16:03:39.505924] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2fd40) on tqpair(0x9cb750): expected_datao=0, payload_size=4096 00:15:41.512 [2024-11-20 16:03:39.505928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505937] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.512 [2024-11-20 
16:03:39.505956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.505960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.505964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.512 [2024-11-20 16:03:39.505981] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:15:41.512 [2024-11-20 16:03:39.505994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.506005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:15:41.512 [2024-11-20 16:03:39.506013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.512 [2024-11-20 16:03:39.506026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.512 [2024-11-20 16:03:39.506047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.512 [2024-11-20 16:03:39.506161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.512 [2024-11-20 16:03:39.506168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.512 [2024-11-20 16:03:39.506172] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506176] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=4096, cccid=4 00:15:41.512 [2024-11-20 16:03:39.506181] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2fd40) on tqpair(0x9cb750): expected_datao=0, payload_size=4096 00:15:41.512 [2024-11-20 16:03:39.506186] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506194] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506198] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.512 [2024-11-20 16:03:39.506213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.512 [2024-11-20 16:03:39.506217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.512 [2024-11-20 16:03:39.506221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506276] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.506298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.513 [2024-11-20 16:03:39.506359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.513 [2024-11-20 16:03:39.506366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.513 [2024-11-20 16:03:39.506370] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506374] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=4096, cccid=4 00:15:41.513 [2024-11-20 16:03:39.506379] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2fd40) on tqpair(0x9cb750): expected_datao=0, payload_size=4096 00:15:41.513 [2024-11-20 16:03:39.506384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506392] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506396] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.506412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.506416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506477] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:15:41.513 [2024-11-20 16:03:39.506482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:15:41.513 [2024-11-20 16:03:39.506488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:15:41.513 [2024-11-20 16:03:39.506507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.506529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.513 [2024-11-20 16:03:39.506571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.513 [2024-11-20 16:03:39.506579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fec0, cid 5, qid 0 00:15:41.513 [2024-11-20 16:03:39.506643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.506650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.506654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.506672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.506676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fec0) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506696] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.506722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fec0, cid 5, qid 0 00:15:41.513 [2024-11-20 16:03:39.506769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.506776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.506780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fec0) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506795] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506800] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.506849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fec0, cid 5, qid 0 00:15:41.513 [2024-11-20 16:03:39.506897] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.506905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.506909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fec0) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.506925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.506929] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.506937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.506956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fec0, cid 5, qid 0 00:15:41.513 [2024-11-20 16:03:39.506998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.513 [2024-11-20 16:03:39.507005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.513 [2024-11-20 16:03:39.507009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fec0) on tqpair=0x9cb750 00:15:41.513 [2024-11-20 16:03:39.507034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.507048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.507056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.507068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.507076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.507088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.507097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9cb750) 00:15:41.513 [2024-11-20 16:03:39.507109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.513 [2024-11-20 16:03:39.507130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fec0, cid 5, qid 0 00:15:41.513 [2024-11-20 16:03:39.507138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fd40, cid 4, qid 0 00:15:41.513 
[2024-11-20 16:03:39.507143] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa30040, cid 6, qid 0 00:15:41.513 [2024-11-20 16:03:39.507148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa301c0, cid 7, qid 0 00:15:41.513 [2024-11-20 16:03:39.507288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.513 [2024-11-20 16:03:39.507304] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.513 [2024-11-20 16:03:39.507309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507313] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=8192, cccid=5 00:15:41.513 [2024-11-20 16:03:39.507318] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2fec0) on tqpair(0x9cb750): expected_datao=0, payload_size=8192 00:15:41.513 [2024-11-20 16:03:39.507323] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507343] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507348] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.513 [2024-11-20 16:03:39.507360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.513 [2024-11-20 16:03:39.507364] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507368] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=512, cccid=4 00:15:41.513 [2024-11-20 16:03:39.507373] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa2fd40) on tqpair(0x9cb750): expected_datao=0, payload_size=512 00:15:41.513 [2024-11-20 16:03:39.507378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507385] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.513 [2024-11-20 16:03:39.507389] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.514 [2024-11-20 16:03:39.507401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.514 [2024-11-20 16:03:39.507404] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507408] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9cb750): datao=0, datal=512, cccid=6 00:15:41.514 [2024-11-20 16:03:39.507413] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa30040) on tqpair(0x9cb750): expected_datao=0, payload_size=512 00:15:41.514 [2024-11-20 16:03:39.507418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507424] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507428] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:41.514 [2024-11-20 16:03:39.507440] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:41.514 [2024-11-20 16:03:39.507443] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507447] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x9cb750): datao=0, datal=4096, cccid=7 00:15:41.514 [2024-11-20 16:03:39.507452] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa301c0) on tqpair(0x9cb750): expected_datao=0, payload_size=4096 00:15:41.514 [2024-11-20 16:03:39.507457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507464] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507468] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.514 [2024-11-20 16:03:39.507479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.514 [2024-11-20 16:03:39.507484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fec0) on tqpair=0x9cb750 00:15:41.514 [2024-11-20 16:03:39.507506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.514 [2024-11-20 16:03:39.507514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.514 [2024-11-20 16:03:39.507517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fd40) on tqpair=0x9cb750 00:15:41.514 [2024-11-20 16:03:39.507535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.514 [2024-11-20 16:03:39.507542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.514 [2024-11-20 16:03:39.507545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507549] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa30040) on tqpair=0x9cb750 00:15:41.514 [2024-11-20 16:03:39.507557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.514 [2024-11-20 16:03:39.507563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.514 [2024-11-20 16:03:39.507567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.514 [2024-11-20 16:03:39.507571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa301c0) on tqpair=0x9cb750 00:15:41.514 ===================================================== 00:15:41.514 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:41.514 ===================================================== 00:15:41.514 Controller Capabilities/Features 00:15:41.514 ================================ 00:15:41.514 Vendor ID: 8086 00:15:41.514 Subsystem Vendor ID: 8086 00:15:41.514 Serial Number: SPDK00000000000001 00:15:41.514 Model Number: SPDK bdev Controller 00:15:41.514 Firmware Version: 25.01 00:15:41.514 Recommended Arb Burst: 6 00:15:41.514 IEEE OUI Identifier: e4 d2 5c 00:15:41.514 Multi-path I/O 00:15:41.514 May have multiple subsystem ports: Yes 00:15:41.514 May have multiple controllers: Yes 00:15:41.514 Associated with SR-IOV VF: No 00:15:41.514 Max Data Transfer Size: 131072 00:15:41.514 Max Number of Namespaces: 32 00:15:41.514 Max Number of I/O Queues: 127 00:15:41.514 NVMe Specification Version (VS): 1.3 00:15:41.514 NVMe Specification Version (Identify): 1.3 00:15:41.514 Maximum Queue Entries: 128 00:15:41.514 Contiguous Queues Required: Yes 00:15:41.514 Arbitration Mechanisms Supported 00:15:41.514 Weighted Round Robin: Not 
Supported 00:15:41.514 Vendor Specific: Not Supported 00:15:41.514 Reset Timeout: 15000 ms 00:15:41.514 Doorbell Stride: 4 bytes 00:15:41.514 NVM Subsystem Reset: Not Supported 00:15:41.514 Command Sets Supported 00:15:41.514 NVM Command Set: Supported 00:15:41.514 Boot Partition: Not Supported 00:15:41.514 Memory Page Size Minimum: 4096 bytes 00:15:41.514 Memory Page Size Maximum: 4096 bytes 00:15:41.514 Persistent Memory Region: Not Supported 00:15:41.514 Optional Asynchronous Events Supported 00:15:41.514 Namespace Attribute Notices: Supported 00:15:41.514 Firmware Activation Notices: Not Supported 00:15:41.514 ANA Change Notices: Not Supported 00:15:41.514 PLE Aggregate Log Change Notices: Not Supported 00:15:41.514 LBA Status Info Alert Notices: Not Supported 00:15:41.514 EGE Aggregate Log Change Notices: Not Supported 00:15:41.514 Normal NVM Subsystem Shutdown event: Not Supported 00:15:41.514 Zone Descriptor Change Notices: Not Supported 00:15:41.514 Discovery Log Change Notices: Not Supported 00:15:41.514 Controller Attributes 00:15:41.514 128-bit Host Identifier: Supported 00:15:41.514 Non-Operational Permissive Mode: Not Supported 00:15:41.514 NVM Sets: Not Supported 00:15:41.514 Read Recovery Levels: Not Supported 00:15:41.514 Endurance Groups: Not Supported 00:15:41.514 Predictable Latency Mode: Not Supported 00:15:41.514 Traffic Based Keep ALive: Not Supported 00:15:41.514 Namespace Granularity: Not Supported 00:15:41.514 SQ Associations: Not Supported 00:15:41.514 UUID List: Not Supported 00:15:41.514 Multi-Domain Subsystem: Not Supported 00:15:41.514 Fixed Capacity Management: Not Supported 00:15:41.514 Variable Capacity Management: Not Supported 00:15:41.514 Delete Endurance Group: Not Supported 00:15:41.514 Delete NVM Set: Not Supported 00:15:41.514 Extended LBA Formats Supported: Not Supported 00:15:41.514 Flexible Data Placement Supported: Not Supported 00:15:41.514 00:15:41.514 Controller Memory Buffer Support 00:15:41.514 ================================ 00:15:41.514 Supported: No 00:15:41.514 00:15:41.514 Persistent Memory Region Support 00:15:41.514 ================================ 00:15:41.514 Supported: No 00:15:41.514 00:15:41.514 Admin Command Set Attributes 00:15:41.514 ============================ 00:15:41.514 Security Send/Receive: Not Supported 00:15:41.514 Format NVM: Not Supported 00:15:41.514 Firmware Activate/Download: Not Supported 00:15:41.514 Namespace Management: Not Supported 00:15:41.514 Device Self-Test: Not Supported 00:15:41.514 Directives: Not Supported 00:15:41.514 NVMe-MI: Not Supported 00:15:41.514 Virtualization Management: Not Supported 00:15:41.514 Doorbell Buffer Config: Not Supported 00:15:41.514 Get LBA Status Capability: Not Supported 00:15:41.514 Command & Feature Lockdown Capability: Not Supported 00:15:41.514 Abort Command Limit: 4 00:15:41.514 Async Event Request Limit: 4 00:15:41.514 Number of Firmware Slots: N/A 00:15:41.514 Firmware Slot 1 Read-Only: N/A 00:15:41.514 Firmware Activation Without Reset: N/A 00:15:41.514 Multiple Update Detection Support: N/A 00:15:41.514 Firmware Update Granularity: No Information Provided 00:15:41.514 Per-Namespace SMART Log: No 00:15:41.514 Asymmetric Namespace Access Log Page: Not Supported 00:15:41.514 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:41.514 Command Effects Log Page: Supported 00:15:41.514 Get Log Page Extended Data: Supported 00:15:41.514 Telemetry Log Pages: Not Supported 00:15:41.514 Persistent Event Log Pages: Not Supported 00:15:41.514 Supported Log Pages Log Page: May 
Support 00:15:41.514 Commands Supported & Effects Log Page: Not Supported 00:15:41.514 Feature Identifiers & Effects Log Page:May Support 00:15:41.514 NVMe-MI Commands & Effects Log Page: May Support 00:15:41.514 Data Area 4 for Telemetry Log: Not Supported 00:15:41.514 Error Log Page Entries Supported: 128 00:15:41.514 Keep Alive: Supported 00:15:41.514 Keep Alive Granularity: 10000 ms 00:15:41.514 00:15:41.514 NVM Command Set Attributes 00:15:41.514 ========================== 00:15:41.514 Submission Queue Entry Size 00:15:41.514 Max: 64 00:15:41.514 Min: 64 00:15:41.514 Completion Queue Entry Size 00:15:41.514 Max: 16 00:15:41.514 Min: 16 00:15:41.514 Number of Namespaces: 32 00:15:41.514 Compare Command: Supported 00:15:41.514 Write Uncorrectable Command: Not Supported 00:15:41.514 Dataset Management Command: Supported 00:15:41.514 Write Zeroes Command: Supported 00:15:41.514 Set Features Save Field: Not Supported 00:15:41.514 Reservations: Supported 00:15:41.514 Timestamp: Not Supported 00:15:41.514 Copy: Supported 00:15:41.514 Volatile Write Cache: Present 00:15:41.514 Atomic Write Unit (Normal): 1 00:15:41.514 Atomic Write Unit (PFail): 1 00:15:41.514 Atomic Compare & Write Unit: 1 00:15:41.514 Fused Compare & Write: Supported 00:15:41.514 Scatter-Gather List 00:15:41.514 SGL Command Set: Supported 00:15:41.514 SGL Keyed: Supported 00:15:41.514 SGL Bit Bucket Descriptor: Not Supported 00:15:41.515 SGL Metadata Pointer: Not Supported 00:15:41.515 Oversized SGL: Not Supported 00:15:41.515 SGL Metadata Address: Not Supported 00:15:41.515 SGL Offset: Supported 00:15:41.515 Transport SGL Data Block: Not Supported 00:15:41.515 Replay Protected Memory Block: Not Supported 00:15:41.515 00:15:41.515 Firmware Slot Information 00:15:41.515 ========================= 00:15:41.515 Active slot: 1 00:15:41.515 Slot 1 Firmware Revision: 25.01 00:15:41.515 00:15:41.515 00:15:41.515 Commands Supported and Effects 00:15:41.515 ============================== 00:15:41.515 Admin Commands 00:15:41.515 -------------- 00:15:41.515 Get Log Page (02h): Supported 00:15:41.515 Identify (06h): Supported 00:15:41.515 Abort (08h): Supported 00:15:41.515 Set Features (09h): Supported 00:15:41.515 Get Features (0Ah): Supported 00:15:41.515 Asynchronous Event Request (0Ch): Supported 00:15:41.515 Keep Alive (18h): Supported 00:15:41.515 I/O Commands 00:15:41.515 ------------ 00:15:41.515 Flush (00h): Supported LBA-Change 00:15:41.515 Write (01h): Supported LBA-Change 00:15:41.515 Read (02h): Supported 00:15:41.515 Compare (05h): Supported 00:15:41.515 Write Zeroes (08h): Supported LBA-Change 00:15:41.515 Dataset Management (09h): Supported LBA-Change 00:15:41.515 Copy (19h): Supported LBA-Change 00:15:41.515 00:15:41.515 Error Log 00:15:41.515 ========= 00:15:41.515 00:15:41.515 Arbitration 00:15:41.515 =========== 00:15:41.515 Arbitration Burst: 1 00:15:41.515 00:15:41.515 Power Management 00:15:41.515 ================ 00:15:41.515 Number of Power States: 1 00:15:41.515 Current Power State: Power State #0 00:15:41.515 Power State #0: 00:15:41.515 Max Power: 0.00 W 00:15:41.515 Non-Operational State: Operational 00:15:41.515 Entry Latency: Not Reported 00:15:41.515 Exit Latency: Not Reported 00:15:41.515 Relative Read Throughput: 0 00:15:41.515 Relative Read Latency: 0 00:15:41.515 Relative Write Throughput: 0 00:15:41.515 Relative Write Latency: 0 00:15:41.515 Idle Power: Not Reported 00:15:41.515 Active Power: Not Reported 00:15:41.515 Non-Operational Permissive Mode: Not Supported 00:15:41.515 00:15:41.515 Health 
Information 00:15:41.515 ================== 00:15:41.515 Critical Warnings: 00:15:41.515 Available Spare Space: OK 00:15:41.515 Temperature: OK 00:15:41.515 Device Reliability: OK 00:15:41.515 Read Only: No 00:15:41.515 Volatile Memory Backup: OK 00:15:41.515 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:41.515 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:41.515 Available Spare: 0% 00:15:41.515 Available Spare Threshold: 0% 00:15:41.515 Life Percentage Used:[2024-11-20 16:03:39.507682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.507689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9cb750) 00:15:41.515 [2024-11-20 16:03:39.507698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.515 [2024-11-20 16:03:39.507723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa301c0, cid 7, qid 0 00:15:41.515 [2024-11-20 16:03:39.507767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.515 [2024-11-20 16:03:39.507775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.515 [2024-11-20 16:03:39.507779] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.507783] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa301c0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.511852] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:15:41.515 [2024-11-20 16:03:39.511885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f740) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.515 [2024-11-20 16:03:39.511901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2f8c0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.511906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.515 [2024-11-20 16:03:39.511912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fa40) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.511917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.515 [2024-11-20 16:03:39.511922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:41.515 [2024-11-20 16:03:39.511938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.511943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.511948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.515 [2024-11-20 16:03:39.511957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.515 [2024-11-20 16:03:39.511987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.515 [2024-11-20 
16:03:39.512039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.515 [2024-11-20 16:03:39.512047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.515 [2024-11-20 16:03:39.512052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.512064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512073] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.515 [2024-11-20 16:03:39.512081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.515 [2024-11-20 16:03:39.512104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.515 [2024-11-20 16:03:39.512174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.515 [2024-11-20 16:03:39.512181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.515 [2024-11-20 16:03:39.512185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.512195] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:15:41.515 [2024-11-20 16:03:39.512200] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:15:41.515 [2024-11-20 16:03:39.512211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.515 [2024-11-20 16:03:39.512227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.515 [2024-11-20 16:03:39.512246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.515 [2024-11-20 16:03:39.512301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.515 [2024-11-20 16:03:39.512313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.515 [2024-11-20 16:03:39.512318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.512335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.515 [2024-11-20 16:03:39.512352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.515 [2024-11-20 16:03:39.512371] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.515 [2024-11-20 16:03:39.512414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.515 [2024-11-20 16:03:39.512426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.515 [2024-11-20 16:03:39.512430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.515 [2024-11-20 16:03:39.512434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.515 [2024-11-20 16:03:39.512446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.512463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.512482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.512524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.512531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.512535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.512550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.512566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.512585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.512627] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.512634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.512638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.512653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.512669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.512687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.512736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 
16:03:39.512743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.512746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512751] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.512761] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.512778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.512796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.512859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.512868] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.512872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.512888] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512897] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.512905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.512925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.512974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.512981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.512985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.512989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 
16:03:39.513095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513209] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513213] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513414] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:15:41.516 [2024-11-20 16:03:39.513445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513646] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.516 [2024-11-20 16:03:39.513656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.516 [2024-11-20 16:03:39.513673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.516 [2024-11-20 16:03:39.513691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.516 [2024-11-20 16:03:39.513737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.516 [2024-11-20 16:03:39.513744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.516 [2024-11-20 16:03:39.513748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.516 [2024-11-20 16:03:39.513753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.513763] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.513780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.513798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.513861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.513870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.513874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.513890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.513908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.513928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.513973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.513981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.513986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.513990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514006] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 
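The repeated FABRIC PROPERTY GET completions in this stretch of the trace are the host polling the controller's CSTS register while nvme_ctrlr_shutdown_set_cc_done / nvme_ctrlr_shutdown_poll_async wait for the shutdown requested via CC.SHN to finish (reported a few lines further down as "shutdown complete in 6 milliseconds"). What follows is only a hedged, self-contained C sketch of that handshake, using stand-in register accessors rather than SPDK's real Fabrics Property Get/Set path; the register field values are taken from the NVMe base specification.

/* Minimal sketch (assumed helpers, not SPDK's actual API) of the shutdown handshake
 * the debug log above is performing over NVMe/TCP: set CC.SHN to "normal shutdown",
 * then poll CSTS.SHST until the controller reports shutdown processing complete. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_CC_SHN_NORMAL  (1u << 14) /* CC.SHN = 01b: request a normal shutdown   */
#define NVME_CSTS_SHST_MASK (3u << 2)  /* CSTS.SHST: shutdown status field          */
#define NVME_CSTS_SHST_DONE (2u << 2)  /* SHST = 10b: shutdown processing complete  */

/* Stand-in registers so the sketch runs; a real Fabrics host would issue
 * Property Set / Property Get admin commands for these accesses instead. */
static uint32_t cc, csts;

static void write_cc(uint32_t v)
{
    cc = v;
    if (cc & NVME_CC_SHN_NORMAL)
        csts |= NVME_CSTS_SHST_DONE; /* stub controller finishes shutdown at once */
}

static uint32_t read_csts(void)
{
    return csts;
}

static bool shutdown_controller(unsigned timeout_ms)
{
    write_cc(cc | NVME_CC_SHN_NORMAL); /* cf. nvme_ctrlr_shutdown_set_cc_done in the trace */
    for (unsigned waited = 0; waited < timeout_ms; waited++) { /* "shutdown timeout = 10000 ms" */
        if ((read_csts() & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_DONE)
            return true; /* "shutdown complete in N milliseconds" */
        /* a real poller would sleep roughly 1 ms between Property Get polls here */
    }
    return false;
}

int main(void)
{
    printf("shutdown %s\n", shutdown_controller(10000) ? "complete" : "timed out");
    return 0;
}

Over NVMe/TCP each such register poll is an admin-queue Fabrics command, which is why every iteration of the wait loop appears in the log as another capsule_cmd send, FABRIC PROPERTY GET notice, and tcp_req completion for the same cid.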
[2024-11-20 16:03:39.514147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514221] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514421] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 
5 00:15:41.517 [2024-11-20 16:03:39.514539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514542] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514558] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514661] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514670] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.514696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.514746] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.514758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.514762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.514778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.514787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.514795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.518822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.518853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.518862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.518866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:41.517 [2024-11-20 16:03:39.518871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.518886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.518892] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.518896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9cb750) 00:15:41.517 [2024-11-20 16:03:39.518906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:41.517 [2024-11-20 16:03:39.518933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa2fbc0, cid 3, qid 0 00:15:41.517 [2024-11-20 16:03:39.518985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:41.517 [2024-11-20 16:03:39.519000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:41.517 [2024-11-20 16:03:39.519004] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:41.517 [2024-11-20 16:03:39.519008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa2fbc0) on tqpair=0x9cb750 00:15:41.517 [2024-11-20 16:03:39.519017] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:15:41.517 0% 00:15:41.517 Data Units Read: 0 00:15:41.517 Data Units Written: 0 00:15:41.517 Host Read Commands: 0 00:15:41.517 Host Write Commands: 0 00:15:41.517 Controller Busy Time: 0 minutes 00:15:41.517 Power Cycles: 0 00:15:41.517 Power On Hours: 0 hours 00:15:41.517 Unsafe Shutdowns: 0 00:15:41.517 Unrecoverable Media Errors: 0 00:15:41.517 Lifetime Error Log Entries: 0 00:15:41.517 Warning Temperature Time: 0 minutes 00:15:41.517 Critical Temperature Time: 0 minutes 00:15:41.517 00:15:41.517 Number of Queues 00:15:41.517 ================ 00:15:41.518 Number of I/O Submission Queues: 127 00:15:41.518 Number of I/O Completion Queues: 127 00:15:41.518 00:15:41.518 Active Namespaces 00:15:41.518 ================= 00:15:41.518 Namespace ID:1 00:15:41.518 Error Recovery Timeout: Unlimited 00:15:41.518 Command Set Identifier: NVM (00h) 00:15:41.518 Deallocate: Supported 00:15:41.518 Deallocated/Unwritten Error: Not Supported 00:15:41.518 Deallocated Read Value: Unknown 00:15:41.518 Deallocate in Write Zeroes: Not Supported 00:15:41.518 Deallocated Guard Field: 0xFFFF 00:15:41.518 Flush: Supported 00:15:41.518 Reservation: Supported 00:15:41.518 Namespace Sharing Capabilities: Multiple Controllers 00:15:41.518 Size (in LBAs): 131072 (0GiB) 00:15:41.518 Capacity (in LBAs): 131072 (0GiB) 00:15:41.518 Utilization (in LBAs): 131072 (0GiB) 00:15:41.518 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:41.518 EUI64: ABCDEF0123456789 00:15:41.518 UUID: 584f0bf8-ddc8-4e86-a834-7aa6b62a0c2f 00:15:41.518 Thin Provisioning: Not Supported 00:15:41.518 Per-NS Atomic Units: Yes 00:15:41.518 Atomic Boundary Size (Normal): 0 00:15:41.518 Atomic Boundary Size (PFail): 0 00:15:41.518 Atomic Boundary Offset: 0 00:15:41.518 Maximum Single Source Range Length: 65535 00:15:41.518 Maximum Copy Length: 65535 00:15:41.518 Maximum Source Range Count: 1 00:15:41.518 NGUID/EUI64 Never Reused: No 00:15:41.518 Namespace Write Protected: No 00:15:41.518 Number of LBA Formats: 1 00:15:41.518 Current LBA Format: LBA Format #00 00:15:41.518 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:41.518 00:15:41.518 16:03:39 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:41.518 rmmod nvme_tcp 00:15:41.518 rmmod nvme_fabrics 00:15:41.518 rmmod nvme_keyring 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 74535 ']' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 74535 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 74535 ']' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 74535 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74535 00:15:41.518 killing process with pid 74535 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74535' 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 74535 00:15:41.518 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 74535 00:15:41.777 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:41.777 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:41.777 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:41.777 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:15:41.778 
16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:41.778 16:03:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.778 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:41.778 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:41.778 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:41.778 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.036 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:15:42.037 00:15:42.037 real 0m2.374s 00:15:42.037 user 0m5.005s 00:15:42.037 sys 0m0.750s 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:42.037 ************************************ 00:15:42.037 END TEST nvmf_identify 00:15:42.037 ************************************ 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:42.037 16:03:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:42.037 ************************************ 00:15:42.037 START TEST nvmf_perf 00:15:42.037 ************************************ 00:15:42.037 16:03:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:42.346 * Looking for test storage... 00:15:42.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:42.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.346 --rc genhtml_branch_coverage=1 00:15:42.346 --rc genhtml_function_coverage=1 00:15:42.346 --rc genhtml_legend=1 00:15:42.346 --rc geninfo_all_blocks=1 00:15:42.346 --rc geninfo_unexecuted_blocks=1 00:15:42.346 00:15:42.346 ' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:42.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.346 --rc genhtml_branch_coverage=1 00:15:42.346 --rc genhtml_function_coverage=1 00:15:42.346 --rc genhtml_legend=1 00:15:42.346 --rc geninfo_all_blocks=1 00:15:42.346 --rc geninfo_unexecuted_blocks=1 00:15:42.346 00:15:42.346 ' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:42.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.346 --rc genhtml_branch_coverage=1 00:15:42.346 --rc genhtml_function_coverage=1 00:15:42.346 --rc genhtml_legend=1 00:15:42.346 --rc geninfo_all_blocks=1 00:15:42.346 --rc geninfo_unexecuted_blocks=1 00:15:42.346 00:15:42.346 ' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:42.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:42.346 --rc genhtml_branch_coverage=1 00:15:42.346 --rc genhtml_function_coverage=1 00:15:42.346 --rc genhtml_legend=1 00:15:42.346 --rc geninfo_all_blocks=1 00:15:42.346 --rc geninfo_unexecuted_blocks=1 00:15:42.346 00:15:42.346 ' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.346 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:42.347 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:42.347 Cannot find device "nvmf_init_br" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:42.347 Cannot find device "nvmf_init_br2" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:42.347 Cannot find device "nvmf_tgt_br" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:42.347 Cannot find device "nvmf_tgt_br2" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:42.347 Cannot find device "nvmf_init_br" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:42.347 Cannot find device "nvmf_init_br2" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:42.347 Cannot find device "nvmf_tgt_br" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:42.347 Cannot find device "nvmf_tgt_br2" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:42.347 Cannot find device "nvmf_br" 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:15:42.347 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:42.606 Cannot find device "nvmf_init_if" 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:42.606 Cannot find device "nvmf_init_if2" 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:42.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:42.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:42.606 16:03:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:42.606 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:42.864 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:42.864 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:42.864 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:42.864 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:42.864 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:42.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:42.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:15:42.865 00:15:42.865 --- 10.0.0.3 ping statistics --- 00:15:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.865 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:42.865 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:15:42.865 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:15:42.865 00:15:42.865 --- 10.0.0.4 ping statistics --- 00:15:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.865 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:42.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:42.865 00:15:42.865 --- 10.0.0.1 ping statistics --- 00:15:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.865 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:42.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:42.865 00:15:42.865 --- 10.0.0.2 ping statistics --- 00:15:42.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.865 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74794 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74794 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 74794 ']' 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
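[Editor's note] The nvmf_veth_init sequence traced above (nvmf/common.sh@145-225) stands up the virtual test network used for the rest of this run: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends, a bridge nvmf_br joining the initiator- and target-side peer ends, addresses 10.0.0.1/10.0.0.2 on the initiator side and 10.0.0.3/10.0.0.4 inside the namespace, and iptables ACCEPT rules for TCP port 4420. Below is a minimal standalone sketch of the same topology with plain iproute2/iptables, not the test framework itself; names and addresses mirror the trace, and the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) is built identically and omitted for brevity.

    # one namespace for the target side
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: "*_if" is the endpoint, "*_br" is the peer that joins the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses as seen in the trace: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge joining both sides
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  up && ip link set nvmf_tgt_br  master nvmf_br
    # allow NVMe/TCP traffic to the default port used by the test (4420)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # verify, as the trace does, that the initiator reaches the namespaced target address
    ping -c 1 10.0.0.3

The pings recorded above (10.0.0.3/10.0.0.4 from the host, 10.0.0.1/10.0.0.2 from inside the namespace) are the framework's check that this topology is up before the nvmf target is started.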
00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.865 16:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:42.865 [2024-11-20 16:03:40.984518] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:42.865 [2024-11-20 16:03:40.984625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.123 [2024-11-20 16:03:41.139535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.123 [2024-11-20 16:03:41.215397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.123 [2024-11-20 16:03:41.215722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.123 [2024-11-20 16:03:41.215939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.123 [2024-11-20 16:03:41.216157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.123 [2024-11-20 16:03:41.216267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:43.123 [2024-11-20 16:03:41.217669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.123 [2024-11-20 16:03:41.217727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.123 [2024-11-20 16:03:41.217791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.123 [2024-11-20 16:03:41.217942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.123 [2024-11-20 16:03:41.279105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.123 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.123 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:15:43.123 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.123 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.123 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:43.381 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.381 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:43.381 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:43.639 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:43.639 16:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:44.206 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:44.206 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:44.465 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:44.465 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:15:44.465 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:44.465 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:44.465 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:44.723 [2024-11-20 16:03:42.753979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:44.723 16:03:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:44.981 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:44.981 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:45.239 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:45.239 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:45.497 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:45.755 [2024-11-20 16:03:43.900031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:45.755 16:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:46.014 16:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:46.014 16:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:46.014 16:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:46.014 16:03:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:47.392 Initializing NVMe Controllers 00:15:47.392 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:47.392 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:47.392 Initialization complete. Launching workers. 00:15:47.392 ======================================================== 00:15:47.392 Latency(us) 00:15:47.392 Device Information : IOPS MiB/s Average min max 00:15:47.392 PCIE (0000:00:10.0) NSID 1 from core 0: 24224.00 94.62 1320.50 361.87 5948.06 00:15:47.392 ======================================================== 00:15:47.392 Total : 24224.00 94.62 1320.50 361.87 5948.06 00:15:47.392 00:15:47.392 16:03:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:48.769 Initializing NVMe Controllers 00:15:48.769 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:48.769 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:48.769 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:48.769 Initialization complete. Launching workers. 
00:15:48.769 ======================================================== 00:15:48.769 Latency(us) 00:15:48.769 Device Information : IOPS MiB/s Average min max 00:15:48.769 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3707.82 14.48 269.37 109.33 7179.62 00:15:48.769 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.99 0.48 8128.37 5061.60 12002.78 00:15:48.769 ======================================================== 00:15:48.769 Total : 3831.81 14.97 523.68 109.33 12002.78 00:15:48.769 00:15:48.769 16:03:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.161 Initializing NVMe Controllers 00:15:50.161 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:50.161 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:50.161 Initialization complete. Launching workers. 00:15:50.161 ======================================================== 00:15:50.161 Latency(us) 00:15:50.161 Device Information : IOPS MiB/s Average min max 00:15:50.161 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8613.70 33.65 3716.98 707.62 9597.70 00:15:50.161 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3939.57 15.39 8161.84 6637.36 16900.35 00:15:50.161 ======================================================== 00:15:50.161 Total : 12553.26 49.04 5111.90 707.62 16900.35 00:15:50.161 00:15:50.161 16:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:50.161 16:03:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:52.691 Initializing NVMe Controllers 00:15:52.691 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.691 Controller IO queue size 128, less than required. 00:15:52.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.691 Controller IO queue size 128, less than required. 00:15:52.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:52.691 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:52.691 Initialization complete. Launching workers. 
00:15:52.691 ======================================================== 00:15:52.691 Latency(us) 00:15:52.691 Device Information : IOPS MiB/s Average min max 00:15:52.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1578.89 394.72 81961.43 52818.17 130307.98 00:15:52.691 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 672.03 168.01 197195.73 64611.88 310314.78 00:15:52.691 ======================================================== 00:15:52.691 Total : 2250.91 562.73 116365.45 52818.17 310314.78 00:15:52.691 00:15:52.691 16:03:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:15:52.691 Initializing NVMe Controllers 00:15:52.691 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:52.691 Controller IO queue size 128, less than required. 00:15:52.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.691 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:52.691 Controller IO queue size 128, less than required. 00:15:52.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:52.691 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:52.691 WARNING: Some requested NVMe devices were skipped 00:15:52.691 No valid NVMe controllers or AIO or URING devices found 00:15:52.691 16:03:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:15:55.223 Initializing NVMe Controllers 00:15:55.223 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.223 Controller IO queue size 128, less than required. 00:15:55.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:55.223 Controller IO queue size 128, less than required. 00:15:55.223 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:55.223 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.223 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:55.223 Initialization complete. Launching workers. 
00:15:55.223 00:15:55.223 ==================== 00:15:55.223 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:55.223 TCP transport: 00:15:55.223 polls: 10440 00:15:55.223 idle_polls: 7122 00:15:55.223 sock_completions: 3318 00:15:55.223 nvme_completions: 6169 00:15:55.223 submitted_requests: 9246 00:15:55.223 queued_requests: 1 00:15:55.223 00:15:55.223 ==================== 00:15:55.223 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:55.223 TCP transport: 00:15:55.223 polls: 10456 00:15:55.223 idle_polls: 6293 00:15:55.223 sock_completions: 4163 00:15:55.223 nvme_completions: 6735 00:15:55.223 submitted_requests: 10140 00:15:55.223 queued_requests: 1 00:15:55.223 ======================================================== 00:15:55.223 Latency(us) 00:15:55.223 Device Information : IOPS MiB/s Average min max 00:15:55.223 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1541.89 385.47 84456.30 45609.01 153787.27 00:15:55.223 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1683.38 420.84 76240.98 39438.48 130635.57 00:15:55.223 ======================================================== 00:15:55.223 Total : 3225.27 806.32 80168.44 39438.48 153787.27 00:15:55.223 00:15:55.223 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:55.223 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:55.482 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:55.482 rmmod nvme_tcp 00:15:55.482 rmmod nvme_fabrics 00:15:55.482 rmmod nvme_keyring 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74794 ']' 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74794 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 74794 ']' 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 74794 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74794 00:15:55.740 killing process with pid 74794 00:15:55.740 16:03:53 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74794' 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 74794 00:15:55.740 16:03:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 74794 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:15:56.676 00:15:56.676 real 0m14.592s 00:15:56.676 user 0m52.088s 00:15:56.676 sys 0m4.198s 00:15:56.676 16:03:54 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.676 ************************************ 00:15:56.676 END TEST nvmf_perf 00:15:56.676 ************************************ 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:56.676 ************************************ 00:15:56.676 START TEST nvmf_fio_host 00:15:56.676 ************************************ 00:15:56.676 16:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:56.936 * Looking for test storage... 00:15:56.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:56.937 16:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:56.937 16:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:15:56.937 16:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.937 --rc genhtml_branch_coverage=1 00:15:56.937 --rc genhtml_function_coverage=1 00:15:56.937 --rc genhtml_legend=1 00:15:56.937 --rc geninfo_all_blocks=1 00:15:56.937 --rc geninfo_unexecuted_blocks=1 00:15:56.937 00:15:56.937 ' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.937 --rc genhtml_branch_coverage=1 00:15:56.937 --rc genhtml_function_coverage=1 00:15:56.937 --rc genhtml_legend=1 00:15:56.937 --rc geninfo_all_blocks=1 00:15:56.937 --rc geninfo_unexecuted_blocks=1 00:15:56.937 00:15:56.937 ' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.937 --rc genhtml_branch_coverage=1 00:15:56.937 --rc genhtml_function_coverage=1 00:15:56.937 --rc genhtml_legend=1 00:15:56.937 --rc geninfo_all_blocks=1 00:15:56.937 --rc geninfo_unexecuted_blocks=1 00:15:56.937 00:15:56.937 ' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:56.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.937 --rc genhtml_branch_coverage=1 00:15:56.937 --rc genhtml_function_coverage=1 00:15:56.937 --rc genhtml_legend=1 00:15:56.937 --rc geninfo_all_blocks=1 00:15:56.937 --rc geninfo_unexecuted_blocks=1 00:15:56.937 00:15:56.937 ' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.937 16:03:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.937 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.938 16:03:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.938 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
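[Editor's note] The "line 33: [: : integer expression expected" message above (also seen earlier in the nvmf_perf preamble) is bash's test builtin complaining that '[' '' -eq 1 ']' compares an empty string arithmetically; the actual variable name at nvmf/common.sh line 33 is expanded away in the xtrace output and is not visible here. The test simply returns a non-zero status, so the script falls through to the next branch and the run continues, as the subsequent common.sh@37/@39 lines show. A small reproduction and a guarded form, using a hypothetical variable name:

    # reproduction of the diagnostic seen in the trace
    SOME_FLAG=''                          # hypothetical name; unset/empty in this run
    [ "$SOME_FLAG" -eq 1 ] && echo on     # prints: [: : integer expression expected
    # guarded form: default the empty value before the numeric comparison
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo on   # quiet, evaluates false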
00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.938 Cannot find device "nvmf_init_br" 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.938 Cannot find device "nvmf_init_br2" 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.938 Cannot find device "nvmf_tgt_br" 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:15:56.938 Cannot find device "nvmf_tgt_br2" 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:15:56.938 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.938 Cannot find device "nvmf_init_br" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.197 Cannot find device "nvmf_init_br2" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.197 Cannot find device "nvmf_tgt_br" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.197 Cannot find device "nvmf_tgt_br2" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.197 Cannot find device "nvmf_br" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.197 Cannot find device "nvmf_init_if" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:57.197 Cannot find device "nvmf_init_if2" 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:15:57.197 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.198 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:57.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:57.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:57.456 00:15:57.456 --- 10.0.0.3 ping statistics --- 00:15:57.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.456 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:57.456 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:57.456 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:15:57.456 00:15:57.456 --- 10.0.0.4 ping statistics --- 00:15:57.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.456 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:57.456 00:15:57.456 --- 10.0.0.1 ping statistics --- 00:15:57.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.456 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:57.456 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:57.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:15:57.457 00:15:57.457 --- 10.0.0.2 ping statistics --- 00:15:57.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.457 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75261 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75261 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 75261 ']' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.457 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:57.457 [2024-11-20 16:03:55.566662] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:15:57.457 [2024-11-20 16:03:55.567318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.715 [2024-11-20 16:03:55.721143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.715 [2024-11-20 16:03:55.778655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.715 [2024-11-20 16:03:55.778974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.715 [2024-11-20 16:03:55.779074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.715 [2024-11-20 16:03:55.779162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.715 [2024-11-20 16:03:55.779268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
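The pings above confirm the topology that nvmf_veth_init just built: two initiator-side veth interfaces in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and all peer ends enslaved to the nvmf_br bridge, after which nvmf_tgt is started inside the namespace. A condensed sketch of the same layout for a single veth pair (the harness also creates the second pair and the iptables ACCEPT rules for port 4420, omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                   # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                        # initiator -> target, as in the log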
00:15:57.715 [2024-11-20 16:03:55.780688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.715 [2024-11-20 16:03:55.780840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.715 [2024-11-20 16:03:55.780949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.715 [2024-11-20 16:03:55.780951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.715 [2024-11-20 16:03:55.840314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:57.715 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.715 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:15:57.715 16:03:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.973 [2024-11-20 16:03:56.191781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.973 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:57.973 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.973 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:58.231 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:58.488 Malloc1 00:15:58.488 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:58.746 16:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.004 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:59.004 [2024-11-20 16:03:57.236368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.262 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:59.521 16:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:15:59.521 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:59.521 fio-3.35 00:15:59.521 Starting 1 thread 00:16:02.055 00:16:02.055 test: (groupid=0, jobs=1): err= 0: pid=75331: Wed Nov 20 16:04:00 2024 00:16:02.055 read: IOPS=8658, BW=33.8MiB/s (35.5MB/s)(67.9MiB/2007msec) 00:16:02.055 slat (nsec): min=1999, max=286481, avg=2755.59, stdev=3336.52 00:16:02.055 clat (usec): min=2182, max=14636, avg=7691.88, stdev=641.82 00:16:02.055 lat (usec): min=2254, max=14638, avg=7694.63, stdev=641.62 00:16:02.055 clat percentiles (usec): 00:16:02.055 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:16:02.055 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:16:02.055 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8717], 00:16:02.055 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[12649], 99.95th=[13566], 00:16:02.055 | 99.99th=[14615] 00:16:02.055 bw ( KiB/s): min=34024, max=35472, per=99.93%, avg=34610.00, stdev=633.65, samples=4 00:16:02.055 iops : min= 8506, max= 8868, avg=8652.50, stdev=158.41, samples=4 00:16:02.055 write: IOPS=8649, BW=33.8MiB/s (35.4MB/s)(67.8MiB/2007msec); 0 zone resets 00:16:02.055 slat (usec): min=2, max=221, avg= 2.86, stdev= 2.70 00:16:02.055 clat (usec): min=2042, max=13476, avg=7036.44, stdev=575.85 00:16:02.055 lat (usec): min=2053, max=13478, avg=7039.30, stdev=575.82 00:16:02.055 clat 
percentiles (usec): 00:16:02.055 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:16:02.055 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:16:02.055 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7767], 95.00th=[ 7963], 00:16:02.055 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[11469], 99.95th=[11994], 00:16:02.055 | 99.99th=[13042] 00:16:02.055 bw ( KiB/s): min=34296, max=35080, per=100.00%, avg=34610.00, stdev=342.38, samples=4 00:16:02.055 iops : min= 8574, max= 8770, avg=8652.50, stdev=85.59, samples=4 00:16:02.055 lat (msec) : 4=0.12%, 10=99.69%, 20=0.19% 00:16:02.055 cpu : usr=68.79%, sys=23.18%, ctx=271, majf=0, minf=7 00:16:02.055 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.056 issued rwts: total=17378,17359,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.056 00:16:02.056 Run status group 0 (all jobs): 00:16:02.056 READ: bw=33.8MiB/s (35.5MB/s), 33.8MiB/s-33.8MiB/s (35.5MB/s-35.5MB/s), io=67.9MiB (71.2MB), run=2007-2007msec 00:16:02.056 WRITE: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=67.8MiB (71.1MB), run=2007-2007msec 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:02.056 16:04:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:16:02.056 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:02.056 fio-3.35 00:16:02.056 Starting 1 thread 00:16:04.611 00:16:04.611 test: (groupid=0, jobs=1): err= 0: pid=75376: Wed Nov 20 16:04:02 2024 00:16:04.611 read: IOPS=8161, BW=128MiB/s (134MB/s)(256MiB/2004msec) 00:16:04.611 slat (usec): min=2, max=122, avg= 3.80, stdev= 2.41 00:16:04.611 clat (usec): min=2172, max=17380, avg=8779.74, stdev=2770.15 00:16:04.611 lat (usec): min=2176, max=17383, avg=8783.54, stdev=2770.20 00:16:04.611 clat percentiles (usec): 00:16:04.611 | 1.00th=[ 4015], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 6325], 00:16:04.611 | 30.00th=[ 7046], 40.00th=[ 7701], 50.00th=[ 8455], 60.00th=[ 9241], 00:16:04.611 | 70.00th=[10159], 80.00th=[10814], 90.00th=[12518], 95.00th=[13960], 00:16:04.611 | 99.00th=[16319], 99.50th=[16581], 99.90th=[16909], 99.95th=[17171], 00:16:04.611 | 99.99th=[17171] 00:16:04.611 bw ( KiB/s): min=61696, max=68800, per=49.84%, avg=65080.00, stdev=3359.89, samples=4 00:16:04.611 iops : min= 3856, max= 4300, avg=4067.50, stdev=209.99, samples=4 00:16:04.611 write: IOPS=4693, BW=73.3MiB/s (76.9MB/s)(133MiB/1815msec); 0 zone resets 00:16:04.611 slat (usec): min=31, max=385, avg=38.72, stdev= 9.56 00:16:04.611 clat (usec): min=6344, max=20001, avg=12441.13, stdev=2330.34 00:16:04.611 lat (usec): min=6379, max=20047, avg=12479.84, stdev=2331.28 00:16:04.611 clat percentiles (usec): 00:16:04.611 | 1.00th=[ 8160], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[10421], 00:16:04.611 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:16:04.611 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15926], 95.00th=[16712], 00:16:04.611 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19268], 00:16:04.611 | 99.99th=[20055] 00:16:04.611 bw ( KiB/s): min=62656, max=72256, per=90.31%, avg=67816.00, stdev=4328.52, samples=4 00:16:04.611 iops : min= 3916, max= 4516, avg=4238.50, stdev=270.53, samples=4 00:16:04.611 lat (msec) : 4=0.62%, 10=48.44%, 20=50.93%, 50=0.01% 00:16:04.611 cpu : usr=82.18%, sys=13.78%, ctx=3, majf=0, minf=14 00:16:04.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:04.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.611 issued rwts: total=16355,8518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.611 00:16:04.611 Run status group 0 (all jobs): 00:16:04.611 
READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (268MB), run=2004-2004msec 00:16:04.611 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=133MiB (140MB), run=1815-1815msec 00:16:04.611 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:04.869 16:04:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:04.869 rmmod nvme_tcp 00:16:04.869 rmmod nvme_fabrics 00:16:04.869 rmmod nvme_keyring 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 75261 ']' 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 75261 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 75261 ']' 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 75261 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75261 00:16:04.869 killing process with pid 75261 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75261' 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 75261 00:16:04.869 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 75261 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:16:05.127 16:04:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:05.127 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:16:05.386 00:16:05.386 real 0m8.666s 00:16:05.386 user 0m34.324s 00:16:05.386 sys 0m2.391s 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.386 ************************************ 00:16:05.386 END TEST nvmf_fio_host 00:16:05.386 ************************************ 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:05.386 ************************************ 00:16:05.386 START TEST nvmf_failover 
00:16:05.386 ************************************ 00:16:05.386 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:05.645 * Looking for test storage... 00:16:05.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.645 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.646 --rc genhtml_branch_coverage=1 00:16:05.646 --rc genhtml_function_coverage=1 00:16:05.646 --rc genhtml_legend=1 00:16:05.646 --rc geninfo_all_blocks=1 00:16:05.646 --rc geninfo_unexecuted_blocks=1 00:16:05.646 00:16:05.646 ' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.646 --rc genhtml_branch_coverage=1 00:16:05.646 --rc genhtml_function_coverage=1 00:16:05.646 --rc genhtml_legend=1 00:16:05.646 --rc geninfo_all_blocks=1 00:16:05.646 --rc geninfo_unexecuted_blocks=1 00:16:05.646 00:16:05.646 ' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.646 --rc genhtml_branch_coverage=1 00:16:05.646 --rc genhtml_function_coverage=1 00:16:05.646 --rc genhtml_legend=1 00:16:05.646 --rc geninfo_all_blocks=1 00:16:05.646 --rc geninfo_unexecuted_blocks=1 00:16:05.646 00:16:05.646 ' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:05.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.646 --rc genhtml_branch_coverage=1 00:16:05.646 --rc genhtml_function_coverage=1 00:16:05.646 --rc genhtml_legend=1 00:16:05.646 --rc geninfo_all_blocks=1 00:16:05.646 --rc geninfo_unexecuted_blocks=1 00:16:05.646 00:16:05.646 ' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.646 
16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:05.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
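The "Cannot find device" and "Cannot open network namespace" messages that follow are also expected: nvmftestfini for the previous test already deleted the veth pairs, the bridge and the namespace, and nvmf_veth_init starts by tearing down any leftovers before rebuilding them. Each cleanup command is followed by true in the trace, i.e. roughly the tolerant idiom sketched below (the exact wrapping in common.sh may differ slightly):

    ip link delete nvmf_br type bridge || true     # "Cannot find device" on a clean host is fine
    ip link delete nvmf_init_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true   # fails if the netns is already gone
    ip netns add nvmf_tgt_ns_spdk                  # then rebuild the topology from scratch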
00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:05.646 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:05.647 Cannot find device "nvmf_init_br" 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:05.647 Cannot find device "nvmf_init_br2" 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:05.647 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:16:05.647 Cannot find device "nvmf_tgt_br" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:05.906 Cannot find device "nvmf_tgt_br2" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:05.906 Cannot find device "nvmf_init_br" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:05.906 Cannot find device "nvmf_init_br2" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:05.906 Cannot find device "nvmf_tgt_br" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:05.906 Cannot find device "nvmf_tgt_br2" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:05.906 Cannot find device "nvmf_br" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:05.906 Cannot find device "nvmf_init_if" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:05.906 Cannot find device "nvmf_init_if2" 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:05.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:05.906 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:16:05.906 16:04:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:05.906 
16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:05.906 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:06.165 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:06.165 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:16:06.165 00:16:06.165 --- 10.0.0.3 ping statistics --- 00:16:06.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.165 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:16:06.165 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:06.165 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:06.165 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:16:06.165 00:16:06.165 --- 10.0.0.4 ping statistics --- 00:16:06.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.165 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:06.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:06.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:06.166 00:16:06.166 --- 10.0.0.1 ping statistics --- 00:16:06.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.166 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:06.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:06.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:06.166 00:16:06.166 --- 10.0.0.2 ping statistics --- 00:16:06.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:06.166 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=75642 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 75642 00:16:06.166 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75642 ']' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.166 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.166 [2024-11-20 16:04:04.374006] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:06.166 [2024-11-20 16:04:04.374312] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.424 [2024-11-20 16:04:04.529434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.424 [2024-11-20 16:04:04.596928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:06.424 [2024-11-20 16:04:04.596984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:06.424 [2024-11-20 16:04:04.597000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:06.424 [2024-11-20 16:04:04.597011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:06.424 [2024-11-20 16:04:04.597020] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
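For reference, the test-bed topology that nvmf_veth_init assembles in the lines above reduces to the sketch below. Interface names and addresses are copied from the logged ip/iptables commands; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) follows the same pattern and is left out, so this is a condensed reading of the log, not a verbatim replay.

  ip netns add nvmf_tgt_ns_spdk                                  # target runs inside its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                                # bridge joining the two host-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the host stack
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic to cross the bridge
  ping -c 1 10.0.0.3                                             # initiator -> target reachability check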
00:16:06.424 [2024-11-20 16:04:04.598332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.424 [2024-11-20 16:04:04.598438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.424 [2024-11-20 16:04:04.598446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.424 [2024-11-20 16:04:04.657120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:06.682 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.682 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:06.683 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:06.683 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:06.683 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:06.683 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.683 16:04:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:06.940 [2024-11-20 16:04:05.067221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:06.940 16:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:07.198 Malloc0 00:16:07.198 16:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:07.765 16:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.023 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:08.023 [2024-11-20 16:04:06.244226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:08.023 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:08.282 [2024-11-20 16:04:06.488410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:08.282 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:08.541 [2024-11-20 16:04:06.736677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75702 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
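Condensed, the target-side bring-up that failover.sh drives in the lines above is the RPC sequence below; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py exactly as invoked in the log, and the arguments are copied from the logged commands rather than re-derived, so treat the inline comments as a reading of the log, not authoritative option documentation.

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport with the options used by the test
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # RAM-backed bdev: 64 MB, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f   # host side: wait for the perform_tests RPC, qd 128, 4 KiB verify I/O for 15 s

The lines that follow then attach NVMe0 to 10.0.0.3 ports 4420 and 4421 with -x failover and alternately remove and re-add the three listeners, forcing the path switches that the try.txt excerpt further down records as aborted commands and "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421".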
00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75702 /var/tmp/bdevperf.sock 00:16:08.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75702 ']' 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.541 16:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:09.107 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.107 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:16:09.107 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:09.365 NVMe0n1 00:16:09.365 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:09.623 00:16:09.623 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75715 00:16:09.623 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.623 16:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:10.557 16:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:11.126 16:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:16:14.433 16:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:16:14.433 00:16:14.433 16:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:14.691 16:04:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:16:17.982 16:04:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:17.982 [2024-11-20 16:04:15.959476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:17.982 16:04:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:16:18.972 16:04:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:16:19.230 16:04:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75715 00:16:25.805 { 00:16:25.805 "results": [ 00:16:25.805 { 00:16:25.805 "job": "NVMe0n1", 00:16:25.805 "core_mask": "0x1", 00:16:25.805 "workload": "verify", 00:16:25.805 "status": "finished", 00:16:25.805 "verify_range": { 00:16:25.805 "start": 0, 00:16:25.805 "length": 16384 00:16:25.805 }, 00:16:25.805 "queue_depth": 128, 00:16:25.805 "io_size": 4096, 00:16:25.805 "runtime": 15.009416, 00:16:25.805 "iops": 8006.9071308304065, 00:16:25.805 "mibps": 31.276980979806275, 00:16:25.805 "io_failed": 3093, 00:16:25.805 "io_timeout": 0, 00:16:25.805 "avg_latency_us": 15551.643380078938, 00:16:25.805 "min_latency_us": 636.7418181818182, 00:16:25.805 "max_latency_us": 25618.618181818183 00:16:25.805 } 00:16:25.805 ], 00:16:25.805 "core_count": 1 00:16:25.805 } 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75702 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75702 ']' 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75702 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75702 00:16:25.805 killing process with pid 75702 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75702' 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75702 00:16:25.805 16:04:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75702 00:16:25.805 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:25.805 [2024-11-20 16:04:06.802310] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:25.805 [2024-11-20 16:04:06.802415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75702 ] 00:16:25.805 [2024-11-20 16:04:06.949356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.805 [2024-11-20 16:04:07.012713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.805 [2024-11-20 16:04:07.073103] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:25.805 Running I/O for 15 seconds... 
00:16:25.805 7211.00 IOPS, 28.17 MiB/s [2024-11-20T16:04:24.055Z] [2024-11-20 16:04:09.055509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.055911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.055926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:25.805 [2024-11-20 16:04:09.055941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.805 [2024-11-20 16:04:09.056323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.805 [2024-11-20 16:04:09.056339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056660] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.056979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.056992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66696 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:25.806 [2024-11-20 16:04:09.057405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.806 [2024-11-20 16:04:09.057480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.806 [2024-11-20 16:04:09.057494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.057524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.057984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.057998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.058028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.807 [2024-11-20 16:04:09.058413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.058444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.807 [2024-11-20 16:04:09.058474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.807 [2024-11-20 16:04:09.058490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.808 [2024-11-20 16:04:09.058662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.058981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.058997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 
[2024-11-20 16:04:09.059122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.808 [2024-11-20 16:04:09.059257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.808 [2024-11-20 16:04:09.059272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:80 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.809 [2024-11-20 16:04:09.059982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.059997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1035d30 is same with the state(6) to be set 00:16:25.809 [2024-11-20 16:04:09.060020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.809 [2024-11-20 16:04:09.060032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.809 [2024-11-20 16:04:09.060053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66368 len:8 PRP1 0x0 PRP2 0x0 00:16:25.809 [2024-11-20 16:04:09.060067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.060133] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:25.809 [2024-11-20 16:04:09.060200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.809 [2024-11-20 16:04:09.060223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.060240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.809 [2024-11-20 16:04:09.060254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.060268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.809 [2024-11-20 16:04:09.060282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.060297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.809 [2024-11-20 16:04:09.060311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.809 [2024-11-20 16:04:09.060325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:16:25.809 [2024-11-20 16:04:09.064246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:25.809 [2024-11-20 16:04:09.064300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b710 (9): Bad file descriptor 00:16:25.809 [2024-11-20 16:04:09.093949] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:16:25.809 7785.00 IOPS, 30.41 MiB/s [2024-11-20T16:04:24.059Z] 8196.33 IOPS, 32.02 MiB/s [2024-11-20T16:04:24.060Z] 8351.25 IOPS, 32.62 MiB/s [2024-11-20T16:04:24.060Z] [2024-11-20 16:04:12.690858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.690962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691174] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:68592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.810 [2024-11-20 16:04:12.691753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:25.810 [2024-11-20 16:04:12.691859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.810 [2024-11-20 16:04:12.691880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.810 [2024-11-20 16:04:12.691895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.691920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.691936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.691949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.691964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.691977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.691992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692162] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.811 [2024-11-20 16:04:12.692814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.811 [2024-11-20 16:04:12.692858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.811 [2024-11-20 16:04:12.692873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.692896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.692914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.692927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.692943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.692957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.692972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.692985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68840 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:25.812 [2024-11-20 16:04:12.693505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693852] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.812 [2024-11-20 16:04:12.693882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.693980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.693994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694187] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.812 [2024-11-20 16:04:12.694261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.812 [2024-11-20 16:04:12.694275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.813 [2024-11-20 16:04:12.694310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.813 [2024-11-20 16:04:12.694340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.813 [2024-11-20 16:04:12.694377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.813 [2024-11-20 16:04:12.694411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.813 [2024-11-20 16:04:12.694619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103a370 is same with the state(6) to be set 00:16:25.813 [2024-11-20 16:04:12.694651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69008 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69400 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69408 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694822] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69416 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69424 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69432 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.694957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.694971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.694981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.694992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69440 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69448 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69456 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69464 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69472 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69480 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695283] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69488 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69496 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69504 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 
16:04:12.695445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69512 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.813 [2024-11-20 16:04:12.695482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.813 [2024-11-20 16:04:12.695492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69520 len:8 PRP1 0x0 PRP2 0x0 00:16:25.813 [2024-11-20 16:04:12.695506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695567] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:16:25.813 [2024-11-20 16:04:12.695634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.813 [2024-11-20 16:04:12.695657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.813 [2024-11-20 16:04:12.695687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.813 [2024-11-20 16:04:12.695722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.813 [2024-11-20 16:04:12.695759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.813 [2024-11-20 16:04:12.695773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:16:25.813 [2024-11-20 16:04:12.695830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b710 (9): Bad file descriptor 00:16:25.813 [2024-11-20 16:04:12.699665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:16:25.813 [2024-11-20 16:04:12.721624] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:16:25.813 8354.20 IOPS, 32.63 MiB/s [2024-11-20T16:04:24.063Z] 8414.17 IOPS, 32.87 MiB/s [2024-11-20T16:04:24.063Z] 8472.43 IOPS, 33.10 MiB/s [2024-11-20T16:04:24.064Z] 8463.62 IOPS, 33.06 MiB/s [2024-11-20T16:04:24.064Z] 8458.22 IOPS, 33.04 MiB/s [2024-11-20T16:04:24.064Z] [2024-11-20 16:04:17.264970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127464 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.265872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.265978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.265992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:25.814 [2024-11-20 16:04:17.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.814 [2024-11-20 16:04:17.266525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:127520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:127552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.814 [2024-11-20 16:04:17.266732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.814 [2024-11-20 16:04:17.266747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.266777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.266818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.266871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.266902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.266932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.266972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.266988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:128120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.815 [2024-11-20 16:04:17.267908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.267985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.267999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 
16:04:17.268015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:25.815 [2024-11-20 16:04:17.268218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.815 [2024-11-20 16:04:17.268234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:25.816 [2024-11-20 16:04:17.268431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10367e0 is same with the state(6) to be set 00:16:25.816 [2024-11-20 16:04:17.268470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127760 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128216 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268591] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128224 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 
16:04:17.268651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128232 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268689] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128240 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128248 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128256 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128264 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268911] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128272 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.268950] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.268968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.268980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128280 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.268993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128288 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128296 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128304 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128312 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128320 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:128328 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128336 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128344 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128352 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128360 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128368 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128376 len:8 PRP1 0x0 PRP2 
0x0 00:16:25.816 [2024-11-20 16:04:17.269618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128384 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:25.816 [2024-11-20 16:04:17.269700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:25.816 [2024-11-20 16:04:17.269711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128392 len:8 PRP1 0x0 PRP2 0x0 00:16:25.816 [2024-11-20 16:04:17.269725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269793] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:16:25.816 [2024-11-20 16:04:17.269864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.816 [2024-11-20 16:04:17.269898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.816 [2024-11-20 16:04:17.269930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.816 [2024-11-20 16:04:17.269959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.816 [2024-11-20 16:04:17.269973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.817 [2024-11-20 16:04:17.269987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.817 [2024-11-20 16:04:17.270001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:16:25.817 [2024-11-20 16:04:17.270062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9b710 (9): Bad file descriptor 00:16:25.817 [2024-11-20 16:04:17.273896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:16:25.817 [2024-11-20 16:04:17.295749] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:16:25.817 8309.10 IOPS, 32.46 MiB/s [2024-11-20T16:04:24.067Z] 7942.09 IOPS, 31.02 MiB/s [2024-11-20T16:04:24.067Z] 7867.58 IOPS, 30.73 MiB/s [2024-11-20T16:04:24.067Z] 7932.54 IOPS, 30.99 MiB/s [2024-11-20T16:04:24.067Z] 7973.36 IOPS, 31.15 MiB/s [2024-11-20T16:04:24.067Z] 8005.00 IOPS, 31.27 MiB/s
00:16:25.817 Latency(us)
00:16:25.817 [2024-11-20T16:04:24.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:25.817 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:25.817 Verification LBA range: start 0x0 length 0x4000
00:16:25.817 NVMe0n1 : 15.01 8006.91 31.28 206.07 0.00 15551.64 636.74 25618.62
00:16:25.817 [2024-11-20T16:04:24.067Z] ===================================================================================================================
00:16:25.817 [2024-11-20T16:04:24.067Z] Total : 8006.91 31.28 206.07 0.00 15551.64 636.74 25618.62
00:16:25.817 Received shutdown signal, test time was about 15.000000 seconds
00:16:25.817
00:16:25.817 Latency(us)
00:16:25.817 [2024-11-20T16:04:24.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:25.817 [2024-11-20T16:04:24.067Z] ===================================================================================================================
00:16:25.817 [2024-11-20T16:04:24.067Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75899
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75899 /var/tmp/bdevperf.sock
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 75899 ']'
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
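The path management that failover.sh drives in the trace that follows is easier to read as a condensed shell sketch. This is only an illustrative restatement of the rpc.py calls visible below (written with a repo-relative scripts/rpc.py path for brevity), and it assumes the target from earlier in the log still exports nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420 and that bdevperf is serving RPCs on /var/tmp/bdevperf.sock:

    # Target side: expose the same subsystem on two additional TCP ports (4421 and 4422).
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

    # Initiator (bdevperf) side: register all three paths under one bdev name; with -x failover
    # the extra transport IDs are kept as standby paths rather than an active-active group.
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done

    # Drop the active 4420 path while I/O is running; bdev_nvme should fail over to 4421, which
    # is the "Start failover from 10.0.0.3:4420 to 10.0.0.3:4421" line seen later in try.txt.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

The grep -c 'Resetting controller successful' check just above applies the inverse idea to the previous run: three provoked failovers are expected to leave exactly three successful controller resets in bdevperf's log.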
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:25.817 16:04:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:16:26.140 16:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:26.140 16:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:16:26.140 16:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:16:26.399 [2024-11-20 16:04:24.589630] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:16:26.399 16:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:16:26.659 [2024-11-20 16:04:24.897976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:16:26.917 16:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:27.176 NVMe0n1
00:16:27.176 16:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:27.435
00:16:27.435 16:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:16:28.002
00:16:28.002 16:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:28.002 16:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:16:28.261 16:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:28.520 16:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:16:31.808 16:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:16:31.808 16:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:16:31.808 16:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75979
00:16:31.808 16:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:16:31.808 16:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75979
00:16:33.185 {
00:16:33.185 "results": [
00:16:33.185 {
00:16:33.185 "job": "NVMe0n1",
00:16:33.185 "core_mask": "0x1",
00:16:33.185 "workload": "verify",
00:16:33.185 "status": "finished",
00:16:33.185 "verify_range": {
00:16:33.185 "start": 0,
00:16:33.185 "length": 16384
00:16:33.185 },
00:16:33.185 "queue_depth": 128,
00:16:33.185 "io_size": 4096, 00:16:33.185 "runtime": 1.025054, 00:16:33.185 "iops": 4649.511147705389, 00:16:33.185 "mibps": 18.162152920724175, 00:16:33.185 "io_failed": 0, 00:16:33.185 "io_timeout": 0, 00:16:33.185 "avg_latency_us": 27321.534127341394, 00:16:33.185 "min_latency_us": 4021.5272727272727, 00:16:33.185 "max_latency_us": 30027.403636363637 00:16:33.185 } 00:16:33.185 ], 00:16:33.185 "core_count": 1 00:16:33.185 } 00:16:33.185 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:33.185 [2024-11-20 16:04:23.248087] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:33.185 [2024-11-20 16:04:23.249036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75899 ] 00:16:33.185 [2024-11-20 16:04:23.402252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.185 [2024-11-20 16:04:23.451362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.185 [2024-11-20 16:04:23.509830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:33.185 [2024-11-20 16:04:26.545532] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:16:33.185 [2024-11-20 16:04:26.545647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.185 [2024-11-20 16:04:26.545674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.185 [2024-11-20 16:04:26.545694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.185 [2024-11-20 16:04:26.545709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.185 [2024-11-20 16:04:26.545724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.185 [2024-11-20 16:04:26.545739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.185 [2024-11-20 16:04:26.545762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.185 [2024-11-20 16:04:26.545777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.185 [2024-11-20 16:04:26.545792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:16:33.185 [2024-11-20 16:04:26.545885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:16:33.185 [2024-11-20 16:04:26.545918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b21710 (9): Bad file descriptor 00:16:33.185 [2024-11-20 16:04:26.550810] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 
00:16:33.185 Running I/O for 1 seconds... 00:16:33.185 4619.00 IOPS, 18.04 MiB/s 00:16:33.185 Latency(us) 00:16:33.185 [2024-11-20T16:04:31.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.185 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:33.185 Verification LBA range: start 0x0 length 0x4000 00:16:33.185 NVMe0n1 : 1.03 4649.51 18.16 0.00 0.00 27321.53 4021.53 30027.40 00:16:33.185 [2024-11-20T16:04:31.435Z] =================================================================================================================== 00:16:33.185 [2024-11-20T16:04:31.435Z] Total : 4649.51 18.16 0.00 0.00 27321.53 4021.53 30027.40 00:16:33.185 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:33.185 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:16:33.185 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:33.444 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:33.444 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:16:34.011 16:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:34.269 16:04:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75899 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75899 ']' 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75899 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75899 00:16:37.556 killing process with pid 75899 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75899' 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75899 00:16:37.556 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75899 00:16:37.815 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:37.815 16:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:38.073 rmmod nvme_tcp 00:16:38.073 rmmod nvme_fabrics 00:16:38.073 rmmod nvme_keyring 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 75642 ']' 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 75642 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 75642 ']' 00:16:38.073 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 75642 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75642 00:16:38.074 killing process with pid 75642 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75642' 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 75642 00:16:38.074 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 75642 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 
-- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:38.332 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:16:38.591 00:16:38.591 real 0m33.148s 00:16:38.591 user 2m1.553s 00:16:38.591 sys 0m7.912s 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:38.591 ************************************ 00:16:38.591 END TEST nvmf_failover 00:16:38.591 ************************************ 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.591 ************************************ 00:16:38.591 START TEST nvmf_host_discovery 00:16:38.591 ************************************ 00:16:38.591 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:38.850 * Looking for test storage... 
00:16:38.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:16:38.850 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.851 --rc genhtml_branch_coverage=1 00:16:38.851 --rc genhtml_function_coverage=1 00:16:38.851 --rc genhtml_legend=1 00:16:38.851 --rc geninfo_all_blocks=1 00:16:38.851 --rc geninfo_unexecuted_blocks=1 00:16:38.851 00:16:38.851 ' 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.851 --rc genhtml_branch_coverage=1 00:16:38.851 --rc genhtml_function_coverage=1 00:16:38.851 --rc genhtml_legend=1 00:16:38.851 --rc geninfo_all_blocks=1 00:16:38.851 --rc geninfo_unexecuted_blocks=1 00:16:38.851 00:16:38.851 ' 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.851 --rc genhtml_branch_coverage=1 00:16:38.851 --rc genhtml_function_coverage=1 00:16:38.851 --rc genhtml_legend=1 00:16:38.851 --rc geninfo_all_blocks=1 00:16:38.851 --rc geninfo_unexecuted_blocks=1 00:16:38.851 00:16:38.851 ' 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.851 --rc genhtml_branch_coverage=1 00:16:38.851 --rc genhtml_function_coverage=1 00:16:38.851 --rc genhtml_legend=1 00:16:38.851 --rc geninfo_all_blocks=1 00:16:38.851 --rc geninfo_unexecuted_blocks=1 00:16:38.851 00:16:38.851 ' 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.851 16:04:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.851 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
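The nvmf_veth_init steps traced below wire these variables into a small virtual topology: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the peer ends. The following is a condensed sketch written out by hand from the trace, not the actual common.sh source, assuming the same interface names and 10.0.0.x addressing shown here:

# Namespace for the NVMe-oF target; the initiator side stays in the root namespace.
ip netns add nvmf_tgt_ns_spdk
# veth pairs: initiator ends (nvmf_init_if/if2) and target ends (nvmf_tgt_if/if2),
# each paired with a bridge-facing peer.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1/.2 on the initiator side, 10.0.0.3/.4 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring everything up and tie the bridge-facing peers together on nvmf_br.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
# Admit NVMe/TCP traffic on port 4420 and bridge-internal forwarding, then verify.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3

The ping checks at the end correspond to the four connectivity probes (10.0.0.3, 10.0.0.4 from the root namespace, 10.0.0.1, 10.0.0.2 from inside the namespace) visible further down in the trace; the initial "Cannot find device" and "Cannot open network namespace" messages are expected, since the script first tries to tear down any leftover topology before creating a fresh one.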
00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:38.851 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:38.852 Cannot find device "nvmf_init_br" 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:38.852 Cannot find device "nvmf_init_br2" 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:38.852 Cannot find device "nvmf_tgt_br" 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:38.852 Cannot find device "nvmf_tgt_br2" 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:38.852 Cannot find device "nvmf_init_br" 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:16:38.852 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:39.110 Cannot find device "nvmf_init_br2" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:39.110 Cannot find device "nvmf_tgt_br" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:39.110 Cannot find device "nvmf_tgt_br2" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:39.110 Cannot find device "nvmf_br" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:39.110 Cannot find device "nvmf_init_if" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:39.110 Cannot find device "nvmf_init_if2" 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:39.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:39.110 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:39.110 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:39.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:39.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:39.368 00:16:39.368 --- 10.0.0.3 ping statistics --- 00:16:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.368 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:39.368 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:39.368 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:16:39.368 00:16:39.368 --- 10.0.0.4 ping statistics --- 00:16:39.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.368 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:39.368 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:39.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:39.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:39.368 00:16:39.369 --- 10.0.0.1 ping statistics --- 00:16:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.369 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:39.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:39.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:39.369 00:16:39.369 --- 10.0.0.2 ping statistics --- 00:16:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.369 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=76312 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 76312 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76312 ']' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.369 16:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:39.369 [2024-11-20 16:04:37.486021] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:16:39.369 [2024-11-20 16:04:37.486122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.627 [2024-11-20 16:04:37.633436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.627 [2024-11-20 16:04:37.696255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.627 [2024-11-20 16:04:37.696330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.627 [2024-11-20 16:04:37.696358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.627 [2024-11-20 16:04:37.696365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.627 [2024-11-20 16:04:37.696372] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.627 [2024-11-20 16:04:37.696828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.627 [2024-11-20 16:04:37.752438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 [2024-11-20 16:04:38.537740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 [2024-11-20 16:04:38.545922] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.562 16:04:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 null0 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 null1 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76344 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76344 /tmp/host.sock 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 76344 ']' 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.562 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.562 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.562 [2024-11-20 16:04:38.642558] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
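At this point the discovery test has assembled two SPDK applications: the target nvmf_tgt running inside the test namespace and driven over the default RPC socket (/var/tmp/spdk.sock), and a second nvmf_tgt instance acting as the NVMe host, driven over its own socket at /tmp/host.sock so the two processes do not collide. A hand-written condensation of the commands seen in this part of the trace, with repository paths abbreviated; the real script drives them through the rpc_cmd and waitforlisten helpers and waits for each socket before issuing RPCs:

# Target side (namespace, default RPC socket): transport, discovery listener, null bdevs.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512
./scripts/rpc.py bdev_null_create null1 1000 512

# Host side (root namespace, dedicated RPC socket): enable bdev_nvme logging and
# start the discovery service against the target's 10.0.0.3:8009 listener, as the
# trace does just below.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test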
00:16:40.562 [2024-11-20 16:04:38.642666] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76344 ] 00:16:40.562 [2024-11-20 16:04:38.797610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.821 [2024-11-20 16:04:38.869421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.821 [2024-11-20 16:04:38.929006] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.821 16:04:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:40.821 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.080 16:04:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.080 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.081 16:04:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.081 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 [2024-11-20 16:04:39.386214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:41.339 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.340 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:41.340 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:41.340 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:41.340 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.598 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:16:41.598 16:04:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:16:41.857 [2024-11-20 16:04:40.020975] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:41.857 [2024-11-20 16:04:40.021042] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:41.857 [2024-11-20 16:04:40.021078] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:41.857 [2024-11-20 16:04:40.027013] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:41.857 [2024-11-20 16:04:40.081419] 
bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:41.857 [2024-11-20 16:04:40.082799] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xed1e60:1 started. 00:16:41.857 [2024-11-20 16:04:40.084880] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:41.857 [2024-11-20 16:04:40.084928] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:41.857 [2024-11-20 16:04:40.089595] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xed1e60 was disconnected and freed. delete nvme_qpair. 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:42.463 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.754 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.754 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.754 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:42.754 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:42.754 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
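For reference, the get_subsystem_paths helper that the assertion at host/discovery.sh@107 above loops on can be reconstructed from the xtrace; this is only a sketch pieced together from the commands logged here, the actual discovery.sh source may differ in detail:

    get_subsystem_paths() {
        # list the trsvcid (port) of every path attached to controller $1,
        # queried over the host application's RPC socket
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

With only the 4420 listener attached, this prints "4420", which is what the [[ 4420 == 4420 ]] comparison above confirms.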
00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:42.755 [2024-11-20 16:04:40.873320] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xeaa4a0:1 started. 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:42.755 [2024-11-20 16:04:40.880313] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xeaa4a0 was disconnected and freed. delete nvme_qpair. 
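The check at host/discovery.sh@113 above waits for get_bdev_list to report both namespaces; pieced together from the xtrace (again only a sketch, not the authoritative script source), the helper is roughly:

    get_bdev_list() {
        # space-separated, sorted names of every bdev the host app currently sees
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

so "nvme0n1 nvme0n2" is expected once the extra namespace added via nvmf_subsystem_add_ns (null1 here) has propagated to the host.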
00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.755 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.755 [2024-11-20 16:04:40.987717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:42.755 [2024-11-20 16:04:40.988565] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:42.755 [2024-11-20 16:04:40.988602] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:42.756 [2024-11-20 16:04:40.994545] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:42.756 16:04:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:43.014 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.014 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.014 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.015 [2024-11-20 16:04:41.054986] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:16:43.015 [2024-11-20 16:04:41.055035] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:43.015 [2024-11-20 16:04:41.055046] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:43.015 [2024-11-20 16:04:41.055053] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.015 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.016 [2024-11-20 16:04:41.217080] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:16:43.016 [2024-11-20 16:04:41.217117] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.016 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.016 [2024-11-20 16:04:41.221615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.016 [2024-11-20 16:04:41.221652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.016 [2024-11-20 16:04:41.221678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.016 [2024-11-20 16:04:41.221688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.016 [2024-11-20 16:04:41.221698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.017 [2024-11-20 16:04:41.221707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.017 [2024-11-20 16:04:41.221718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:43.017 [2024-11-20 16:04:41.221727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:43.017 [2024-11-20 16:04:41.221737] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeae230 is same with the state(6) to be set 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:43.017 [2024-11-20 16:04:41.223081] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:16:43.017 [2024-11-20 16:04:41.223112] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:43.017 [2024-11-20 16:04:41.223222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeae230 (9): Bad file descriptor 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:43.017 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.276 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:43.277 16:04:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:43.277 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.536 16:04:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.471 [2024-11-20 16:04:42.641941] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:44.471 [2024-11-20 16:04:42.642231] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:44.471 [2024-11-20 16:04:42.642267] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:44.471 [2024-11-20 16:04:42.647973] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:16:44.471 [2024-11-20 16:04:42.706315] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:16:44.471 [2024-11-20 16:04:42.707103] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xed3990:1 started. 00:16:44.471 [2024-11-20 16:04:42.708720] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:44.471 [2024-11-20 16:04:42.708760] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:44.471 [2024-11-20 16:04:42.710772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xed3990 was disconnected and freed. delete nvme_qpair. 
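The negative test at host/discovery.sh@143 re-issues the discovery RPC under a -b name that is already registered and expects the JSON-RPC "File exists" (-17) error shown just below; the failing call, taken verbatim from the xtrace, is:

    # expected to fail with code -17 (File exists): "nvme" is already a running discovery service
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w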
00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:44.471 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.472 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.472 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.472 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.731 request: 00:16:44.731 { 00:16:44.731 "name": "nvme", 00:16:44.731 "trtype": "tcp", 00:16:44.731 "traddr": "10.0.0.3", 00:16:44.731 "adrfam": "ipv4", 00:16:44.731 "trsvcid": "8009", 00:16:44.731 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:44.731 "wait_for_attach": true, 00:16:44.731 "method": "bdev_nvme_start_discovery", 00:16:44.731 "req_id": 1 00:16:44.731 } 00:16:44.731 Got JSON-RPC error response 00:16:44.731 response: 00:16:44.731 { 00:16:44.731 "code": -17, 00:16:44.731 "message": "File exists" 00:16:44.731 } 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.731 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.732 request: 00:16:44.732 { 00:16:44.732 "name": "nvme_second", 00:16:44.732 "trtype": "tcp", 00:16:44.732 "traddr": "10.0.0.3", 00:16:44.732 "adrfam": "ipv4", 00:16:44.732 "trsvcid": "8009", 00:16:44.732 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:44.732 "wait_for_attach": true, 00:16:44.732 "method": "bdev_nvme_start_discovery", 00:16:44.732 "req_id": 1 00:16:44.732 } 00:16:44.732 Got JSON-RPC error response 00:16:44.732 response: 00:16:44.732 { 00:16:44.732 "code": -17, 00:16:44.732 "message": "File exists" 00:16:44.732 } 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.732 16:04:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.107 [2024-11-20 16:04:43.965354] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:46.107 [2024-11-20 16:04:43.965435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed3370 with addr=10.0.0.3, port=8010 00:16:46.107 [2024-11-20 16:04:43.965462] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:46.107 [2024-11-20 16:04:43.965473] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:46.107 [2024-11-20 16:04:43.965484] 
bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:47.043 [2024-11-20 16:04:44.965338] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:47.043 [2024-11-20 16:04:44.965412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed3370 with addr=10.0.0.3, port=8010 00:16:47.043 [2024-11-20 16:04:44.965437] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:47.043 [2024-11-20 16:04:44.965448] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:47.043 [2024-11-20 16:04:44.965458] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:16:47.980 [2024-11-20 16:04:45.965168] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:16:47.980 request: 00:16:47.980 { 00:16:47.980 "name": "nvme_second", 00:16:47.980 "trtype": "tcp", 00:16:47.980 "traddr": "10.0.0.3", 00:16:47.980 "adrfam": "ipv4", 00:16:47.980 "trsvcid": "8010", 00:16:47.980 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:47.980 "wait_for_attach": false, 00:16:47.980 "attach_timeout_ms": 3000, 00:16:47.980 "method": "bdev_nvme_start_discovery", 00:16:47.980 "req_id": 1 00:16:47.980 } 00:16:47.980 Got JSON-RPC error response 00:16:47.980 response: 00:16:47.980 { 00:16:47.980 "code": -110, 00:16:47.980 "message": "Connection timed out" 00:16:47.980 } 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:47.980 16:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76344 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.980 rmmod nvme_tcp 00:16:47.980 rmmod nvme_fabrics 00:16:47.980 rmmod nvme_keyring 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 76312 ']' 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 76312 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 76312 ']' 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 76312 00:16:47.980 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76312 00:16:48.239 killing process with pid 76312 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76312' 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 76312 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 76312 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:48.239 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:48.497 16:04:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:16:48.497 00:16:48.497 real 0m9.911s 00:16:48.497 user 0m18.284s 00:16:48.497 sys 0m2.023s 00:16:48.497 ************************************ 00:16:48.497 END TEST nvmf_host_discovery 00:16:48.497 ************************************ 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.497 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.757 ************************************ 00:16:48.757 START TEST nvmf_host_multipath_status 00:16:48.757 ************************************ 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:48.757 * Looking for test storage... 
00:16:48.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:48.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.757 --rc genhtml_branch_coverage=1 00:16:48.757 --rc genhtml_function_coverage=1 00:16:48.757 --rc genhtml_legend=1 00:16:48.757 --rc geninfo_all_blocks=1 00:16:48.757 --rc geninfo_unexecuted_blocks=1 00:16:48.757 00:16:48.757 ' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:48.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.757 --rc genhtml_branch_coverage=1 00:16:48.757 --rc genhtml_function_coverage=1 00:16:48.757 --rc genhtml_legend=1 00:16:48.757 --rc geninfo_all_blocks=1 00:16:48.757 --rc geninfo_unexecuted_blocks=1 00:16:48.757 00:16:48.757 ' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:48.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.757 --rc genhtml_branch_coverage=1 00:16:48.757 --rc genhtml_function_coverage=1 00:16:48.757 --rc genhtml_legend=1 00:16:48.757 --rc geninfo_all_blocks=1 00:16:48.757 --rc geninfo_unexecuted_blocks=1 00:16:48.757 00:16:48.757 ' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:48.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:48.757 --rc genhtml_branch_coverage=1 00:16:48.757 --rc genhtml_function_coverage=1 00:16:48.757 --rc genhtml_legend=1 00:16:48.757 --rc geninfo_all_blocks=1 00:16:48.757 --rc geninfo_unexecuted_blocks=1 00:16:48.757 00:16:48.757 ' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:48.757 16:04:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.757 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:48.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:48.758 Cannot find device "nvmf_init_br" 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:48.758 16:04:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:49.017 Cannot find device "nvmf_init_br2" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:49.017 Cannot find device "nvmf_tgt_br" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:49.017 Cannot find device "nvmf_tgt_br2" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:49.017 Cannot find device "nvmf_init_br" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:49.017 Cannot find device "nvmf_init_br2" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:49.017 Cannot find device "nvmf_tgt_br" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:49.017 Cannot find device "nvmf_tgt_br2" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:49.017 Cannot find device "nvmf_br" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:16:49.017 Cannot find device "nvmf_init_if" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:49.017 Cannot find device "nvmf_init_if2" 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:49.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:49.017 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:49.017 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:49.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:49.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:49.275 00:16:49.275 --- 10.0.0.3 ping statistics --- 00:16:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.275 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:49.275 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:49.275 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:16:49.275 00:16:49.275 --- 10.0.0.4 ping statistics --- 00:16:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.275 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:49.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:49.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:16:49.275 00:16:49.275 --- 10.0.0.1 ping statistics --- 00:16:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.275 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:49.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:49.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:49.275 00:16:49.275 --- 10.0.0.2 ping statistics --- 00:16:49.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:49.275 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76846 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76846 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76846 ']' 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.275 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.275 [2024-11-20 16:04:47.434865] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:16:49.275 [2024-11-20 16:04:47.434967] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.533 [2024-11-20 16:04:47.591798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.533 [2024-11-20 16:04:47.649037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:49.533 [2024-11-20 16:04:47.649106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:49.533 [2024-11-20 16:04:47.649120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:49.533 [2024-11-20 16:04:47.649131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:49.533 [2024-11-20 16:04:47.649140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:49.533 [2024-11-20 16:04:47.650408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.533 [2024-11-20 16:04:47.650426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.533 [2024-11-20 16:04:47.710283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76846 00:16:49.791 16:04:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:50.050 [2024-11-20 16:04:48.116788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:50.050 16:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:50.307 Malloc0 00:16:50.307 16:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:50.564 16:04:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.128 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:51.128 [2024-11-20 16:04:49.322903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.128 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:16:51.385 [2024-11-20 16:04:49.587023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76900 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76900 /var/tmp/bdevperf.sock 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 76900 ']' 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.385 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:51.386 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.386 16:04:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:52.760 16:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.760 16:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:16:52.760 16:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:52.760 16:04:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:53.018 Nvme0n1 00:16:53.276 16:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:53.534 Nvme0n1 00:16:53.534 16:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:53.534 16:04:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:55.437 16:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:55.437 16:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:55.695 16:04:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:56.263 16:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:57.198 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:57.198 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:57.198 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.198 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:57.456 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.456 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:57.456 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.456 16:04:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:57.716 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:57.716 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:57.716 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.716 16:04:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:57.975 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:57.975 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:57.975 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:57.975 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:58.234 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.234 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:58.234 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:58.234 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.494 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.494 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:58.494 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:58.494 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:58.752 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:58.752 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:58.752 16:04:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:59.345 16:04:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:59.345 16:04:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:00.287 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:00.287 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:00.287 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:00.287 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:00.855 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:00.855 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:00.855 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:00.855 16:04:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.115 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.115 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:01.115 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:01.115 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.373 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.374 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:01.374 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.374 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:01.633 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.633 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:01.633 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.633 16:04:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:01.891 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:01.891 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:01.891 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:01.891 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:02.149 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:02.149 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:02.149 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:02.406 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:02.664 16:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:04.112 16:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:04.112 16:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:04.112 16:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.112 16:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:04.112 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.112 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:04.112 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.112 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:04.370 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:04.370 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:04.370 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:04.370 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:04.628 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:04.628 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:17:04.628 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:04.628 16:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.194 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.194 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:05.194 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.194 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:05.452 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.452 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:05.452 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:05.452 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:05.711 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:05.711 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:05.711 16:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:05.969 16:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:06.228 16:05:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:07.608 16:05:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.608 16:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:07.867 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:07.867 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:07.867 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:07.867 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:08.125 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.125 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:08.125 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.125 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:08.385 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.385 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:08.385 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:08.385 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.956 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:08.956 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:08.956 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:08.956 16:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:08.956 16:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:08.956 16:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:08.956 16:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:09.533 16:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:09.533 16:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:10.910 16:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:10.910 16:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:10.910 16:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:10.910 16:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:10.910 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:10.910 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:10.910 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:10.910 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.169 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:11.169 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:11.169 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:11.169 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.734 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.734 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:11.734 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.734 16:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:11.992 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:11.992 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:11.992 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:11.992 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:17:12.250 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.250 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:12.250 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:12.250 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:12.509 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:12.509 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:12.509 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:17:12.768 16:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:13.026 16:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:13.964 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:13.964 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:13.964 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:13.964 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:14.223 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:14.223 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:14.481 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.481 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:14.739 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.739 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:14.739 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.739 16:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:17:14.997 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:14.997 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:14.997 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:14.997 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:15.255 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.255 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:15.255 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.255 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:15.513 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:15.513 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:15.513 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:15.513 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:15.771 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:15.771 16:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:16.029 16:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:16.030 16:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:17:16.287 16:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:16.546 16:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:17.480 16:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:17.480 16:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:17.480 16:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
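Between check rounds the test flips the ANA state of both listeners and then sleeps for a second so the host can pick up the change before the next set of assertions. A sketch of that step using the exact RPC visible in the trace (the NQN, target address, and rpc.py path are the ones from this run):

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  addr=10.0.0.3

  # set_ana_state <state-for-4420> <state-for-4421>: update the ANA state of
  # both TCP listeners on the subsystem, mirroring set_ANA_state in the trace.
  set_ana_state() {
      local state_4420=$1 state_4421=$2
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$addr" -s 4420 -n "$state_4420"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$addr" -s 4421 -n "$state_4421"
  }

  set_ana_state non_optimized optimized
  sleep 1   # give the host a moment to observe the new ANA states before re-checking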
00:17:17.480 16:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:18.047 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.047 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:18.047 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.048 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:18.306 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.306 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:18.306 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.306 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:18.564 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.564 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:18.564 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:18.564 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:18.823 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:18.823 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:18.823 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:18.823 16:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.082 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.082 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:19.082 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:19.082 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:19.340 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:19.340 
16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:19.340 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:19.598 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:19.856 16:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:20.790 16:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:20.790 16:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:20.790 16:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:20.790 16:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.049 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:21.049 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:21.049 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.049 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:21.614 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.614 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:21.614 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.614 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:21.872 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:21.872 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:21.872 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:21.872 16:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:22.129 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.129 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:22.129 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.129 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:22.387 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.387 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:22.387 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:22.387 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:22.645 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:22.645 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:17:22.645 16:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:22.903 16:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:17:23.162 16:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:17:24.097 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:17:24.097 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:24.097 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:24.097 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.372 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.372 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:24.372 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.372 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:24.640 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.640 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:17:24.640 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.640 16:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:24.898 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:24.898 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:24.898 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:24.898 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:25.156 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.156 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:25.156 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:25.156 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.414 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.414 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:25.414 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:25.414 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:25.672 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:25.672 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:17:25.672 16:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:25.930 16:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:26.495 16:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:17:27.428 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:17:27.428 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:27.428 16:05:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.428 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:27.686 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:27.686 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:27.686 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.686 16:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:27.944 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:27.944 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:27.944 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:27.944 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:28.203 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.203 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:28.203 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.203 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:28.462 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.462 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:28.462 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.462 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:28.724 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:28.724 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:28.724 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:28.724 16:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:17:28.982 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:28.982 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76900 00:17:28.983 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76900 ']' 00:17:28.983 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76900 00:17:28.983 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:28.983 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.983 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76900 00:17:29.257 killing process with pid 76900 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76900' 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76900 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76900 00:17:29.257 { 00:17:29.257 "results": [ 00:17:29.257 { 00:17:29.257 "job": "Nvme0n1", 00:17:29.257 "core_mask": "0x4", 00:17:29.257 "workload": "verify", 00:17:29.257 "status": "terminated", 00:17:29.257 "verify_range": { 00:17:29.257 "start": 0, 00:17:29.257 "length": 16384 00:17:29.257 }, 00:17:29.257 "queue_depth": 128, 00:17:29.257 "io_size": 4096, 00:17:29.257 "runtime": 35.487019, 00:17:29.257 "iops": 8317.858425921884, 00:17:29.257 "mibps": 32.49163447625736, 00:17:29.257 "io_failed": 0, 00:17:29.257 "io_timeout": 0, 00:17:29.257 "avg_latency_us": 15355.367902884442, 00:17:29.257 "min_latency_us": 934.6327272727273, 00:17:29.257 "max_latency_us": 4087539.898181818 00:17:29.257 } 00:17:29.257 ], 00:17:29.257 "core_count": 1 00:17:29.257 } 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76900 00:17:29.257 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:29.257 [2024-11-20 16:04:49.662284] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:29.257 [2024-11-20 16:04:49.662401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76900 ] 00:17:29.257 [2024-11-20 16:04:49.814425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.257 [2024-11-20 16:04:49.880397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.257 [2024-11-20 16:04:49.940571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.257 Running I/O for 90 seconds... 
00:17:29.257 6804.00 IOPS, 26.58 MiB/s [2024-11-20T16:05:27.507Z] 6730.50 IOPS, 26.29 MiB/s [2024-11-20T16:05:27.507Z] 6748.33 IOPS, 26.36 MiB/s [2024-11-20T16:05:27.507Z] 6793.00 IOPS, 26.54 MiB/s [2024-11-20T16:05:27.507Z] 6839.20 IOPS, 26.72 MiB/s [2024-11-20T16:05:27.507Z] 6924.00 IOPS, 27.05 MiB/s [2024-11-20T16:05:27.507Z] 7264.57 IOPS, 28.38 MiB/s [2024-11-20T16:05:27.507Z] 7499.25 IOPS, 29.29 MiB/s [2024-11-20T16:05:27.507Z] 7696.78 IOPS, 30.07 MiB/s [2024-11-20T16:05:27.507Z] 7886.10 IOPS, 30.81 MiB/s [2024-11-20T16:05:27.507Z] 8020.27 IOPS, 31.33 MiB/s [2024-11-20T16:05:27.507Z] 8119.92 IOPS, 31.72 MiB/s [2024-11-20T16:05:27.507Z] 8070.23 IOPS, 31.52 MiB/s [2024-11-20T16:05:27.507Z] 8136.64 IOPS, 31.78 MiB/s [2024-11-20T16:05:27.507Z] 8197.87 IOPS, 32.02 MiB/s [2024-11-20T16:05:27.507Z] [2024-11-20 16:05:07.446666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.446746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.446800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.446819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.446888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.446905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.446927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.446943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.446965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.446980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.447002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.447017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.447039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.257 [2024-11-20 16:05:07.447054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.257 [2024-11-20 16:05:07.447078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.447431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.447965] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.447997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.258 
[2024-11-20 16:05:07.448386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.258 [2024-11-20 16:05:07.448490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.258 [2024-11-20 16:05:07.448701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.258 [2024-11-20 16:05:07.448730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.448976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.448992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.259 [2024-11-20 16:05:07.449483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.449970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.449985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.450007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:44 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.450023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.450045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.450060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.450082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.259 [2024-11-20 16:05:07.450103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.259 [2024-11-20 16:05:07.450124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450407] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.450757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 
sqhd:0073 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.450970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.450986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.451023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.451060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.260 [2024-11-20 16:05:07.451097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 
[2024-11-20 16:05:07.451580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.260 [2024-11-20 16:05:07.451614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.260 [2024-11-20 16:05:07.451634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.451649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.451669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.451698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.452953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.452983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.453950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.453965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.261 [2024-11-20 16:05:07.454344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 
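The completions in this stretch of the log all carry the status pair (03/02), which spdk_nvme_print_completion emits as (SCT/SC) in hex: status code type 0x3 is the NVMe Path Related Status group, and status code 0x02 within that group is Asymmetric Access Inaccessible, matching the "ASYMMETRIC ACCESS INACCESSIBLE" text, so every queued READ/WRITE on this ANA-inaccessible path is completed with an error rather than data. A minimal sketch of decoding that pair out of one of these completion lines follows; the regex, the STATUS_NAMES table, and decode_completion are illustrative helpers written for this note, not part of SPDK.

import re

# Only the (SCT, SC) pair seen in this log is listed; 0x3/0x02 is
# "Asymmetric Access Inaccessible" in the NVMe Path Related Status group.
STATUS_NAMES = {
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",
}

# Illustrative pattern for the spdk_nvme_print_completion lines above,
# e.g. "... (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0"
COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)"
)

def decode_completion(line):
    """Return (qid, cid, status name) for one completion log line, or None."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    sct, sc = int(m.group("sct"), 16), int(m.group("sc"), 16)
    name = STATUS_NAMES.get((sct, sc), "sct=0x%x sc=0x%02x" % (sct, sc))
    return int(m.group("qid")), int(m.group("cid")), name

if __name__ == "__main__":
    sample = ("ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 "
              "cdw0:0 sqhd:0024 p:0 m:0 dnr:0")
    print(decode_completion(sample))  # -> (1, 62, 'ASYMMETRIC ACCESS INACCESSIBLE')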
00:17:29.261 [2024-11-20 16:05:07.454430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.261 [2024-11-20 16:05:07.454800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.261 [2024-11-20 16:05:07.454862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.454882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.454903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.454918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.454939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.454954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.454975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.454990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455244] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.262 [2024-11-20 16:05:07.455644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115888 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.455977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.455992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.262 [2024-11-20 16:05:07.456265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.262 [2024-11-20 16:05:07.456286] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.456560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.456583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.466989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0055 
p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.467969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.467991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.468006] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.468043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.263 [2024-11-20 16:05:07.468080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.263 [2024-11-20 16:05:07.468336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.263 [2024-11-20 16:05:07.468351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116088 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.468698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.468970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.468992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 
16:05:07.469201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.264 [2024-11-20 16:05:07.469413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.469959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.469991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.470013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.470045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.470076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.470111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.264 [2024-11-20 16:05:07.470133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.264 [2024-11-20 16:05:07.470165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.470753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.470807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.470878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.470933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.470964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.470987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:90 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.471713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.471777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.471850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.471905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 
16:05:07.471937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.471959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.471991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.472013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.474737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.474782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.474846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.265 [2024-11-20 16:05:07.474874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.474908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.474931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.474963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.474986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.475018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.475040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.475072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 16:05:07.475094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.265 [2024-11-20 16:05:07.475125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.475956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.475990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 16:05:07.476284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.266 [2024-11-20 16:05:07.476857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.266 [2024-11-20 16:05:07.476881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.476912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.476935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.476968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:12 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.476991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.477683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.477737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.477801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.477892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.477946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.477978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 
sqhd:0072 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.267 [2024-11-20 16:05:07.478635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.478941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.478982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.479004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.479035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 16:05:07.479066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:29.267 [2024-11-20 16:05:07.479098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 
16:05:07.479283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.479564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.479946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.479968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.480913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.268 [2024-11-20 16:05:07.480936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
00:17:29.268 [2024-11-20 16:05:07.480968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.480991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.481023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.481045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 16:05:07.481099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.268 [2024-11-20 16:05:07.481130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.481729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.481771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.481808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.481857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.481908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.481930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.481946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.484114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.484187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.269 [2024-11-20 16:05:07.484240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484867] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 16:05:07.484920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 16:05:07.484942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.484957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.484979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.484994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.485032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.485106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.485143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.485181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.270 
[2024-11-20 16:05:07.485249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.485976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.485992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.486029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.486066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.486104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.270 [2024-11-20 16:05:07.486142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 16:05:07.486377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 16:05:07.486399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.486768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.486973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.486995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 
16:05:07.487194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.271 [2024-11-20 16:05:07.487398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.487436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.487474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.487511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.487549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.271 [2024-11-20 16:05:07.487591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.271 [2024-11-20 16:05:07.487621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.487999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:29.272 [2024-11-20 16:05:07.488350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.272 [2024-11-20 16:05:07.488977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.488999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.489014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.489036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.489063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 16:05:07.489086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.489102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.272 [2024-11-20 
16:05:07.489123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.272 [2024-11-20 16:05:07.489139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:07.489546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:07.489573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.273 8072.00 IOPS, 31.53 MiB/s [2024-11-20T16:05:27.523Z] 7597.18 IOPS, 29.68 MiB/s [2024-11-20T16:05:27.523Z] 7175.11 IOPS, 28.03 MiB/s [2024-11-20T16:05:27.523Z] 6797.47 IOPS, 26.55 MiB/s [2024-11-20T16:05:27.523Z] 6594.00 IOPS, 25.76 MiB/s [2024-11-20T16:05:27.523Z] 6732.19 IOPS, 26.30 MiB/s [2024-11-20T16:05:27.523Z] 6842.27 IOPS, 26.73 MiB/s [2024-11-20T16:05:27.523Z] 6987.61 IOPS, 27.30 MiB/s [2024-11-20T16:05:27.523Z] 7234.88 IOPS, 28.26 MiB/s [2024-11-20T16:05:27.523Z] 7423.20 IOPS, 29.00 MiB/s [2024-11-20T16:05:27.523Z] 7618.31 IOPS, 29.76 MiB/s [2024-11-20T16:05:27.523Z] 7674.22 IOPS, 29.98 MiB/s [2024-11-20T16:05:27.523Z] 7727.29 IOPS, 30.18 MiB/s [2024-11-20T16:05:27.523Z] 7775.59 IOPS, 30.37 MiB/s [2024-11-20T16:05:27.523Z] 7877.20 IOPS, 30.77 MiB/s [2024-11-20T16:05:27.523Z] 8026.35 IOPS, 31.35 MiB/s [2024-11-20T16:05:27.523Z] 8168.78 IOPS, 31.91 MiB/s [2024-11-20T16:05:27.523Z] [2024-11-20 16:05:24.460688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.273 [2024-11-20 16:05:24.460763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.460799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.460833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.460859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.460875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.460896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.460912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.460934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.460950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.460972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62400 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.460987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.461964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.461980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.462001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.462017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.462039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.462054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.462076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.462101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.463724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.463755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.463784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.273 [2024-11-20 16:05:24.463802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.273 [2024-11-20 16:05:24.463839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.463857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:17:29.274 [2024-11-20 16:05:24.463879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.463895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.463917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.463933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.463955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.463971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.463993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.464577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.464614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.464781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.464831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.464972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.464994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.465064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:29.274 [2024-11-20 16:05:24.465101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.465138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.274 [2024-11-20 16:05:24.465176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.274 [2024-11-20 16:05:24.465428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.274 [2024-11-20 16:05:24.465455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.465879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465901] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.465917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.465954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.465976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.465992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:17:29.275 [2024-11-20 16:05:24.466292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.466330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.466346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.275 [2024-11-20 16:05:24.469966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.469988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.470003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.470025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.275 [2024-11-20 16:05:24.470041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.275 [2024-11-20 16:05:24.470062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:29.276 [2024-11-20 16:05:24.470571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.470759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.470976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.470998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.471014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.471051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.471089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.276 [2024-11-20 16:05:24.471126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:62552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.276 [2024-11-20 16:05:24.471405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.276 [2024-11-20 16:05:24.471427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:17:29.277 [2024-11-20 16:05:24.471734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.471749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.471942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.471958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.472000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.472022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.472045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.472061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.472083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.472105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.472129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.472144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.475879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.475917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.475954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.475976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.475992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.476066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.277 [2024-11-20 16:05:24.476273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:29.277 [2024-11-20 16:05:24.476383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.277 [2024-11-20 16:05:24.476420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:29.277 [2024-11-20 16:05:24.476442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 
lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.476935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.476972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.476993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
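In these completion prints, (03/02) is Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), and dnr:0 means the Do Not Retry bit is clear, so bdev_nvme is allowed to resubmit each failed command on the other, still-accessible path. While the burst is in progress, the path view can be inspected over bdevperf's RPC socket; a sketch assuming a socket at /var/tmp/bdevperf.sock and the bdev_nvme_get_io_paths RPC (the option and JSON field names below are assumptions and may differ by SPDK version):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # list the I/O paths behind the bdev under test (Nvme0n1) and show which one is currently used
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths -n Nvme0n1 \
      | jq -r '.poll_groups[].io_paths[] | "\(.transport.traddr) current=\(.current) accessible=\(.accessible)"'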
00:17:29.278 [2024-11-20 16:05:24.477561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.278 [2024-11-20 16:05:24.477738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.477968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.278 [2024-11-20 16:05:24.477983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:29.278 [2024-11-20 16:05:24.478005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.478020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.478057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.478093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.478130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478151 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.279 ] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.478177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.478216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.478237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.478253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f 
p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.480957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.480979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.480994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.481031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.481228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.279 [2024-11-20 16:05:24.481353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.279 [2024-11-20 16:05:24.481390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:29.279 [2024-11-20 16:05:24.481412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.481428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.481453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.481469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.481492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.481507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.481538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.481554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.481576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.481592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.482675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:62944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.482720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.482758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.482796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:29.280 [2024-11-20 16:05:24.482850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.482888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.482925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.482962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.482984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.482999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.483021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.483036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.483057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.483084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.483108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.483145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.483160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.483182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.483198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.484907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.484937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.484980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.485000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.485189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.485228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.485331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.280 [2024-11-20 16:05:24.485481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.280 [2024-11-20 16:05:24.485517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:29.280 [2024-11-20 16:05:24.485539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.485637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.485674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.485711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:17:29.281 [2024-11-20 16:05:24.485738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.485762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.485927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.485964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.485985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.486000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.486022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:29.281 [2024-11-20 16:05:24.486037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.486075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:29.281 [2024-11-20 16:05:24.486096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.281 [2024-11-20 16:05:24.486111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:17:29.281 [2024-11-20 16:05:24.486133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:29.281 [2024-11-20 16:05:24.486148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:17:29.281 [2024-11-20 16:05:24.486170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:29.281 [2024-11-20 16:05:24.486185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:17:29.281 [2024-11-20 16:05:24.486207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.281 [2024-11-20 16:05:24.486222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:17:29.281 [2024-11-20 16:05:24.486253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.281 [2024-11-20 16:05:24.486269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:17:29.281 [2024-11-20 16:05:24.486291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:29.281 [2024-11-20 16:05:24.486307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:17:29.281 8263.27 IOPS, 32.28 MiB/s [2024-11-20T16:05:27.531Z]
8287.53 IOPS, 32.37 MiB/s [2024-11-20T16:05:27.531Z]
8310.17 IOPS, 32.46 MiB/s [2024-11-20T16:05:27.531Z]
Received shutdown signal, test time was about 35.487825 seconds
00:17:29.281
00:17:29.281 Latency(us)
00:17:29.281 [2024-11-20T16:05:27.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:29.281 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:17:29.281 Verification LBA range: start 0x0 length 0x4000
00:17:29.281 Nvme0n1 : 35.49 8317.86 32.49 0.00 0.00 15355.37 934.63 4087539.90
00:17:29.281 [2024-11-20T16:05:27.531Z] ===================================================================================================================
00:17:29.281 [2024-11-20T16:05:27.531Z] Total : 8317.86 32.49 0.00 0.00 15355.37 934.63 4087539.90
00:17:29.540 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:17:29.540 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:17:29.540 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:17:29.540 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:29.540 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@124 -- # set +e 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.798 rmmod nvme_tcp 00:17:29.798 rmmod nvme_fabrics 00:17:29.798 rmmod nvme_keyring 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:17:29.798 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76846 ']' 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76846 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 76846 ']' 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 76846 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76846 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.799 killing process with pid 76846 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76846' 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 76846 00:17:29.799 16:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 76846 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:30.057 16:05:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:30.057 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:17:30.316 00:17:30.316 real 0m41.678s 00:17:30.316 user 2m15.104s 00:17:30.316 sys 0m12.351s 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 ************************************ 00:17:30.316 END TEST nvmf_host_multipath_status 00:17:30.316 ************************************ 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 ************************************ 00:17:30.316 START TEST nvmf_discovery_remove_ifc 00:17:30.316 ************************************ 00:17:30.316 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:30.576 * Looking for test storage... 
00:17:30.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.576 --rc genhtml_branch_coverage=1 00:17:30.576 --rc genhtml_function_coverage=1 00:17:30.576 --rc genhtml_legend=1 00:17:30.576 --rc geninfo_all_blocks=1 00:17:30.576 --rc geninfo_unexecuted_blocks=1 00:17:30.576 00:17:30.576 ' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.576 --rc genhtml_branch_coverage=1 00:17:30.576 --rc genhtml_function_coverage=1 00:17:30.576 --rc genhtml_legend=1 00:17:30.576 --rc geninfo_all_blocks=1 00:17:30.576 --rc geninfo_unexecuted_blocks=1 00:17:30.576 00:17:30.576 ' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.576 --rc genhtml_branch_coverage=1 00:17:30.576 --rc genhtml_function_coverage=1 00:17:30.576 --rc genhtml_legend=1 00:17:30.576 --rc geninfo_all_blocks=1 00:17:30.576 --rc geninfo_unexecuted_blocks=1 00:17:30.576 00:17:30.576 ' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.576 --rc genhtml_branch_coverage=1 00:17:30.576 --rc genhtml_function_coverage=1 00:17:30.576 --rc genhtml_legend=1 00:17:30.576 --rc geninfo_all_blocks=1 00:17:30.576 --rc geninfo_unexecuted_blocks=1 00:17:30.576 00:17:30.576 ' 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.576 16:05:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.576 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:30.577 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:30.577 16:05:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:30.577 Cannot find device "nvmf_init_br" 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:30.577 Cannot find device "nvmf_init_br2" 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:30.577 Cannot find device "nvmf_tgt_br" 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:30.577 Cannot find device "nvmf_tgt_br2" 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:17:30.577 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:30.578 Cannot find device "nvmf_init_br" 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:30.578 Cannot find device "nvmf_init_br2" 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:30.578 Cannot find device "nvmf_tgt_br" 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:30.578 Cannot find device "nvmf_tgt_br2" 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:30.578 Cannot find device "nvmf_br" 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:17:30.578 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:30.837 Cannot find device "nvmf_init_if" 00:17:30.837 16:05:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:30.837 Cannot find device "nvmf_init_if2" 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:30.837 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.837 16:05:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.837 16:05:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.837 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.837 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:30.837 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:30.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:30.838 00:17:30.838 --- 10.0.0.3 ping statistics --- 00:17:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.838 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:30.838 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:30.838 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:17:30.838 00:17:30.838 --- 10.0.0.4 ping statistics --- 00:17:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.838 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:17:30.838 00:17:30.838 --- 10.0.0.1 ping statistics --- 00:17:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.838 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:30.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:17:30.838 00:17:30.838 --- 10.0.0.2 ping statistics --- 00:17:30.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.838 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77790 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77790 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77790 ']' 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.838 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.096 [2024-11-20 16:05:29.113105] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:31.096 [2024-11-20 16:05:29.113196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.096 [2024-11-20 16:05:29.259339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.096 [2024-11-20 16:05:29.317212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.096 [2024-11-20 16:05:29.317276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.096 [2024-11-20 16:05:29.317287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.096 [2024-11-20 16:05:29.317322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.096 [2024-11-20 16:05:29.317332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.096 [2024-11-20 16:05:29.317717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.355 [2024-11-20 16:05:29.372620] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.355 [2024-11-20 16:05:29.495745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.355 [2024-11-20 16:05:29.503940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:17:31.355 null0 00:17:31.355 [2024-11-20 16:05:29.535779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77820 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77820 /tmp/host.sock 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 77820 ']' 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.355 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.355 16:05:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:31.614 [2024-11-20 16:05:29.618922] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:17:31.614 [2024-11-20 16:05:29.619016] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77820 ] 00:17:31.614 [2024-11-20 16:05:29.762941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.614 [2024-11-20 16:05:29.826693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:32.551 [2024-11-20 16:05:30.684255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.551 16:05:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.926 [2024-11-20 16:05:31.739071] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:33.926 [2024-11-20 16:05:31.739119] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:33.926 [2024-11-20 16:05:31.739143] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:33.926 [2024-11-20 16:05:31.745118] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:17:33.926 [2024-11-20 16:05:31.799533] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:17:33.926 [2024-11-20 16:05:31.800623] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c3ffc0:1 started. 00:17:33.926 [2024-11-20 16:05:31.802483] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:33.926 [2024-11-20 16:05:31.802550] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:33.926 [2024-11-20 16:05:31.802580] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:33.926 [2024-11-20 16:05:31.802598] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:17:33.926 [2024-11-20 16:05:31.802625] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.926 [2024-11-20 16:05:31.807665] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c3ffc0 was disconnected and freed. delete nvme_qpair. 
00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:33.926 16:05:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:34.859 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:34.860 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:34.860 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.860 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:34.860 16:05:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:35.794 16:05:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:35.795 16:05:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:35.795 16:05:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:35.795 16:05:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:35.795 16:05:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.795 16:05:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:35.795 16:05:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:35.795 16:05:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.053 16:05:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:36.053 16:05:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:36.988 16:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:37.923 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.182 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:17:38.182 16:05:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:39.116 16:05:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:39.116 [2024-11-20 16:05:37.240074] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:39.116 [2024-11-20 16:05:37.240148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.116 [2024-11-20 16:05:37.240166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.116 [2024-11-20 16:05:37.240179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.116 [2024-11-20 16:05:37.240189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.116 [2024-11-20 16:05:37.240199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.116 [2024-11-20 16:05:37.240208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.116 [2024-11-20 16:05:37.240219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.116 [2024-11-20 16:05:37.240228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.116 [2024-11-20 16:05:37.240239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:39.116 [2024-11-20 16:05:37.240248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:39.116 [2024-11-20 16:05:37.240258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c240 is same with the state(6) to be set 00:17:39.116 [2024-11-20 16:05:37.250068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1c240 
(9): Bad file descriptor 00:17:39.116 [2024-11-20 16:05:37.260091] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:17:39.116 [2024-11-20 16:05:37.260114] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:17:39.116 [2024-11-20 16:05:37.260121] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:17:39.116 [2024-11-20 16:05:37.260127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:39.116 [2024-11-20 16:05:37.260169] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:40.051 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:40.310 [2024-11-20 16:05:38.300914] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:40.310 [2024-11-20 16:05:38.301283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1c240 with addr=10.0.0.3, port=4420 00:17:40.310 [2024-11-20 16:05:38.301337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1c240 is same with the state(6) to be set 00:17:40.310 [2024-11-20 16:05:38.301392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c1c240 (9): Bad file descriptor 00:17:40.310 [2024-11-20 16:05:38.301960] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:17:40.310 [2024-11-20 16:05:38.302008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:40.310 [2024-11-20 16:05:38.302022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:40.310 [2024-11-20 16:05:38.302036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:40.310 [2024-11-20 16:05:38.302047] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:40.310 [2024-11-20 16:05:38.302055] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:40.310 [2024-11-20 16:05:38.302062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:40.310 [2024-11-20 16:05:38.302074] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:17:40.310 [2024-11-20 16:05:38.302082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:40.310 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.310 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:40.310 16:05:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:41.244 [2024-11-20 16:05:39.302126] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:17:41.244 [2024-11-20 16:05:39.302190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:17:41.244 [2024-11-20 16:05:39.302227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:17:41.244 [2024-11-20 16:05:39.302239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:17:41.244 [2024-11-20 16:05:39.302249] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:17:41.244 [2024-11-20 16:05:39.302259] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:17:41.244 [2024-11-20 16:05:39.302266] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:17:41.244 [2024-11-20 16:05:39.302272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:17:41.244 [2024-11-20 16:05:39.302312] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:17:41.244 [2024-11-20 16:05:39.302376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.244 [2024-11-20 16:05:39.302392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.244 [2024-11-20 16:05:39.302408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.244 [2024-11-20 16:05:39.302418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.244 [2024-11-20 16:05:39.302429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.244 [2024-11-20 16:05:39.302438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.244 [2024-11-20 16:05:39.302448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.244 [2024-11-20 16:05:39.302458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.244 [2024-11-20 16:05:39.302469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:41.244 [2024-11-20 16:05:39.302478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:41.244 [2024-11-20 16:05:39.302488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:17:41.244 [2024-11-20 16:05:39.302536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba7a20 (9): Bad file descriptor 00:17:41.244 [2024-11-20 16:05:39.303520] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:41.244 [2024-11-20 16:05:39.303538] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:41.244 16:05:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:42.618 16:05:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:42.618 16:05:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:43.185 [2024-11-20 16:05:41.313658] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:17:43.185 [2024-11-20 16:05:41.313967] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:17:43.185 [2024-11-20 16:05:41.314005] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:17:43.185 [2024-11-20 16:05:41.319694] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:17:43.185 [2024-11-20 16:05:41.374082] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:17:43.185 [2024-11-20 16:05:41.375097] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1bfaa60:1 started. 00:17:43.185 [2024-11-20 16:05:41.376472] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:43.185 [2024-11-20 16:05:41.376532] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:43.185 [2024-11-20 16:05:41.376556] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:43.185 [2024-11-20 16:05:41.376573] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:17:43.185 [2024-11-20 16:05:41.376582] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:17:43.185 [2024-11-20 16:05:41.382403] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1bfaa60 was disconnected and freed. delete nvme_qpair. 
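For readers following the trace, the loop above is driven by two small helpers in host/discovery_remove_ifc.sh: get_bdev_list asks the host application for its bdev names over the /tmp/host.sock RPC socket, and wait_for_bdev re-checks that list once per second until the expected NVMe bdev appears. A minimal sketch of that pattern, assuming the rpc_cmd wrapper from the test harness; the helper bodies are reconstructed from the xtrace above, not copied from the script:

    # Sketch only: list the bdev names known to the host app (assumes the
    # rpc_cmd wrapper and the -s /tmp/host.sock socket seen in the trace).
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the expected bdev (e.g. nvme1n1) shows up,
    # mirroring the "[[ ... != \n\v\m\e\1\n\1 ]] ... sleep 1" loop in the trace.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }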
00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77820 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77820 ']' 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77820 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77820 00:17:43.443 killing process with pid 77820 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77820' 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77820 00:17:43.443 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77820 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.702 rmmod nvme_tcp 00:17:43.702 rmmod nvme_fabrics 00:17:43.702 rmmod nvme_keyring 00:17:43.702 16:05:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77790 ']' 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77790 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 77790 ']' 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 77790 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.702 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77790 00:17:44.017 killing process with pid 77790 00:17:44.017 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:44.017 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:44.017 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77790' 00:17:44.017 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 77790 00:17:44.017 16:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 77790 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:44.017 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:17:44.292 00:17:44.292 real 0m13.921s 00:17:44.292 user 0m24.052s 00:17:44.292 sys 0m2.514s 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.292 ************************************ 00:17:44.292 END TEST nvmf_discovery_remove_ifc 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:44.292 ************************************ 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:44.292 ************************************ 00:17:44.292 START TEST nvmf_identify_kernel_target 00:17:44.292 ************************************ 00:17:44.292 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:44.552 * Looking for test storage... 
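The nvmf_identify_kernel_target test starting here first rebuilds the same virtual network as the previous test via nvmftestinit / nvmf_veth_init, traced further down: a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends (10.0.0.3, 10.0.0.4), initiator-side veth ends on the host (10.0.0.1, 10.0.0.2), all joined through the nvmf_br bridge, with iptables rules admitting TCP port 4420. Condensed into a sketch covering one initiator/target pair only (the trace sets up two, and error handling is omitted):

    # Condensed from the nvmf_veth_init trace below; one veth pair per side only.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                             # verify reachability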
00:17:44.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:44.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.552 --rc genhtml_branch_coverage=1 00:17:44.552 --rc genhtml_function_coverage=1 00:17:44.552 --rc genhtml_legend=1 00:17:44.552 --rc geninfo_all_blocks=1 00:17:44.552 --rc geninfo_unexecuted_blocks=1 00:17:44.552 00:17:44.552 ' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:44.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.552 --rc genhtml_branch_coverage=1 00:17:44.552 --rc genhtml_function_coverage=1 00:17:44.552 --rc genhtml_legend=1 00:17:44.552 --rc geninfo_all_blocks=1 00:17:44.552 --rc geninfo_unexecuted_blocks=1 00:17:44.552 00:17:44.552 ' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:44.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.552 --rc genhtml_branch_coverage=1 00:17:44.552 --rc genhtml_function_coverage=1 00:17:44.552 --rc genhtml_legend=1 00:17:44.552 --rc geninfo_all_blocks=1 00:17:44.552 --rc geninfo_unexecuted_blocks=1 00:17:44.552 00:17:44.552 ' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:44.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.552 --rc genhtml_branch_coverage=1 00:17:44.552 --rc genhtml_function_coverage=1 00:17:44.552 --rc genhtml_legend=1 00:17:44.552 --rc geninfo_all_blocks=1 00:17:44.552 --rc geninfo_unexecuted_blocks=1 00:17:44.552 00:17:44.552 ' 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
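Once the network is up, configure_kernel_target from nvmf/common.sh exports one of the local NVMe block devices through the kernel nvmet target rather than an SPDK target; the nvme discover and spdk_nvme_identify runs at the end of this trace then talk to that kernel target as initiators. The xtrace further down shows the echoed values but not the configfs files they are redirected into, so the attribute file names in this condensed sketch are filled in from the standard /sys/kernel/config/nvmet layout rather than taken from the log:

    # Condensed sketch of the configure_kernel_target sequence traced below.
    modprobe nvmet                                    # as in the trace; host-side nvme-tcp was loaded earlier
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1                                > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1                     > "$subsys/namespaces/1/device_path"
    echo 1                                > "$subsys/namespaces/1/enable"
    echo 10.0.0.1                         > "$nvmet/ports/1/addr_traddr"
    echo tcp                              > "$nvmet/ports/1/addr_trtype"
    echo 4420                             > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4                             > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # expose the subsystem on the listening port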
00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.552 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.553 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:44.553 16:05:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:44.553 16:05:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:44.553 Cannot find device "nvmf_init_br" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:44.553 Cannot find device "nvmf_init_br2" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:44.553 Cannot find device "nvmf_tgt_br" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:44.553 Cannot find device "nvmf_tgt_br2" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:44.553 Cannot find device "nvmf_init_br" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:44.553 Cannot find device "nvmf_init_br2" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:44.553 Cannot find device "nvmf_tgt_br" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:44.553 Cannot find device "nvmf_tgt_br2" 00:17:44.553 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:17:44.554 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:44.812 Cannot find device "nvmf_br" 00:17:44.812 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:17:44.812 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:44.812 Cannot find device "nvmf_init_if" 00:17:44.812 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:17:44.812 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:44.812 Cannot find device "nvmf_init_if2" 00:17:44.812 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:44.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.813 16:05:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.813 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:44.813 16:05:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:44.813 16:05:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:44.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:17:44.813 00:17:44.813 --- 10.0.0.3 ping statistics --- 00:17:44.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.813 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:44.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:44.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:44.813 00:17:44.813 --- 10.0.0.4 ping statistics --- 00:17:44.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.813 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:44.813 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:45.072 00:17:45.072 --- 10.0.0.1 ping statistics --- 00:17:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.072 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:45.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:45.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:45.072 00:17:45.072 --- 10.0.0.2 ping statistics --- 00:17:45.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.072 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:45.072 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:45.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.330 Waiting for block devices as requested 00:17:45.330 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:45.589 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:45.589 No valid GPT data, bailing 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:45.589 16:05:43 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:45.589 No valid GPT data, bailing 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:45.589 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:45.847 No valid GPT data, bailing 00:17:45.847 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:45.847 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:45.848 No valid GPT data, bailing 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:17:45.848 16:05:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:45.848 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -a 10.0.0.1 -t tcp -s 4420 00:17:45.848 00:17:45.848 Discovery Log Number of Records 2, Generation counter 2 00:17:45.848 =====Discovery Log Entry 0====== 00:17:45.848 trtype: tcp 00:17:45.848 adrfam: ipv4 00:17:45.848 subtype: current discovery subsystem 00:17:45.848 treq: not specified, sq flow control disable supported 00:17:45.848 portid: 1 00:17:45.848 trsvcid: 4420 00:17:45.848 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:45.848 traddr: 10.0.0.1 00:17:45.848 eflags: none 00:17:45.848 sectype: none 00:17:45.848 =====Discovery Log Entry 1====== 00:17:45.848 trtype: tcp 00:17:45.848 adrfam: ipv4 00:17:45.848 subtype: nvme subsystem 00:17:45.848 treq: not 
specified, sq flow control disable supported 00:17:45.848 portid: 1 00:17:45.848 trsvcid: 4420 00:17:45.848 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:45.848 traddr: 10.0.0.1 00:17:45.848 eflags: none 00:17:45.848 sectype: none 00:17:45.848 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:45.848 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:46.107 ===================================================== 00:17:46.107 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:46.107 ===================================================== 00:17:46.107 Controller Capabilities/Features 00:17:46.107 ================================ 00:17:46.107 Vendor ID: 0000 00:17:46.107 Subsystem Vendor ID: 0000 00:17:46.107 Serial Number: ec49a2d5a41c690b6017 00:17:46.107 Model Number: Linux 00:17:46.107 Firmware Version: 6.8.9-20 00:17:46.107 Recommended Arb Burst: 0 00:17:46.107 IEEE OUI Identifier: 00 00 00 00:17:46.107 Multi-path I/O 00:17:46.107 May have multiple subsystem ports: No 00:17:46.107 May have multiple controllers: No 00:17:46.107 Associated with SR-IOV VF: No 00:17:46.107 Max Data Transfer Size: Unlimited 00:17:46.107 Max Number of Namespaces: 0 00:17:46.107 Max Number of I/O Queues: 1024 00:17:46.107 NVMe Specification Version (VS): 1.3 00:17:46.107 NVMe Specification Version (Identify): 1.3 00:17:46.107 Maximum Queue Entries: 1024 00:17:46.107 Contiguous Queues Required: No 00:17:46.107 Arbitration Mechanisms Supported 00:17:46.107 Weighted Round Robin: Not Supported 00:17:46.107 Vendor Specific: Not Supported 00:17:46.107 Reset Timeout: 7500 ms 00:17:46.107 Doorbell Stride: 4 bytes 00:17:46.107 NVM Subsystem Reset: Not Supported 00:17:46.107 Command Sets Supported 00:17:46.107 NVM Command Set: Supported 00:17:46.107 Boot Partition: Not Supported 00:17:46.107 Memory Page Size Minimum: 4096 bytes 00:17:46.107 Memory Page Size Maximum: 4096 bytes 00:17:46.107 Persistent Memory Region: Not Supported 00:17:46.107 Optional Asynchronous Events Supported 00:17:46.107 Namespace Attribute Notices: Not Supported 00:17:46.107 Firmware Activation Notices: Not Supported 00:17:46.107 ANA Change Notices: Not Supported 00:17:46.107 PLE Aggregate Log Change Notices: Not Supported 00:17:46.107 LBA Status Info Alert Notices: Not Supported 00:17:46.107 EGE Aggregate Log Change Notices: Not Supported 00:17:46.107 Normal NVM Subsystem Shutdown event: Not Supported 00:17:46.107 Zone Descriptor Change Notices: Not Supported 00:17:46.107 Discovery Log Change Notices: Supported 00:17:46.107 Controller Attributes 00:17:46.107 128-bit Host Identifier: Not Supported 00:17:46.107 Non-Operational Permissive Mode: Not Supported 00:17:46.107 NVM Sets: Not Supported 00:17:46.107 Read Recovery Levels: Not Supported 00:17:46.107 Endurance Groups: Not Supported 00:17:46.107 Predictable Latency Mode: Not Supported 00:17:46.107 Traffic Based Keep ALive: Not Supported 00:17:46.107 Namespace Granularity: Not Supported 00:17:46.107 SQ Associations: Not Supported 00:17:46.107 UUID List: Not Supported 00:17:46.107 Multi-Domain Subsystem: Not Supported 00:17:46.107 Fixed Capacity Management: Not Supported 00:17:46.107 Variable Capacity Management: Not Supported 00:17:46.107 Delete Endurance Group: Not Supported 00:17:46.107 Delete NVM Set: Not Supported 00:17:46.107 Extended LBA Formats Supported: Not Supported 00:17:46.107 Flexible Data 
Placement Supported: Not Supported 00:17:46.107 00:17:46.107 Controller Memory Buffer Support 00:17:46.107 ================================ 00:17:46.107 Supported: No 00:17:46.107 00:17:46.107 Persistent Memory Region Support 00:17:46.107 ================================ 00:17:46.107 Supported: No 00:17:46.107 00:17:46.107 Admin Command Set Attributes 00:17:46.107 ============================ 00:17:46.107 Security Send/Receive: Not Supported 00:17:46.107 Format NVM: Not Supported 00:17:46.107 Firmware Activate/Download: Not Supported 00:17:46.107 Namespace Management: Not Supported 00:17:46.107 Device Self-Test: Not Supported 00:17:46.107 Directives: Not Supported 00:17:46.107 NVMe-MI: Not Supported 00:17:46.107 Virtualization Management: Not Supported 00:17:46.108 Doorbell Buffer Config: Not Supported 00:17:46.108 Get LBA Status Capability: Not Supported 00:17:46.108 Command & Feature Lockdown Capability: Not Supported 00:17:46.108 Abort Command Limit: 1 00:17:46.108 Async Event Request Limit: 1 00:17:46.108 Number of Firmware Slots: N/A 00:17:46.108 Firmware Slot 1 Read-Only: N/A 00:17:46.108 Firmware Activation Without Reset: N/A 00:17:46.108 Multiple Update Detection Support: N/A 00:17:46.108 Firmware Update Granularity: No Information Provided 00:17:46.108 Per-Namespace SMART Log: No 00:17:46.108 Asymmetric Namespace Access Log Page: Not Supported 00:17:46.108 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:46.108 Command Effects Log Page: Not Supported 00:17:46.108 Get Log Page Extended Data: Supported 00:17:46.108 Telemetry Log Pages: Not Supported 00:17:46.108 Persistent Event Log Pages: Not Supported 00:17:46.108 Supported Log Pages Log Page: May Support 00:17:46.108 Commands Supported & Effects Log Page: Not Supported 00:17:46.108 Feature Identifiers & Effects Log Page:May Support 00:17:46.108 NVMe-MI Commands & Effects Log Page: May Support 00:17:46.108 Data Area 4 for Telemetry Log: Not Supported 00:17:46.108 Error Log Page Entries Supported: 1 00:17:46.108 Keep Alive: Not Supported 00:17:46.108 00:17:46.108 NVM Command Set Attributes 00:17:46.108 ========================== 00:17:46.108 Submission Queue Entry Size 00:17:46.108 Max: 1 00:17:46.108 Min: 1 00:17:46.108 Completion Queue Entry Size 00:17:46.108 Max: 1 00:17:46.108 Min: 1 00:17:46.108 Number of Namespaces: 0 00:17:46.108 Compare Command: Not Supported 00:17:46.108 Write Uncorrectable Command: Not Supported 00:17:46.108 Dataset Management Command: Not Supported 00:17:46.108 Write Zeroes Command: Not Supported 00:17:46.108 Set Features Save Field: Not Supported 00:17:46.108 Reservations: Not Supported 00:17:46.108 Timestamp: Not Supported 00:17:46.108 Copy: Not Supported 00:17:46.108 Volatile Write Cache: Not Present 00:17:46.108 Atomic Write Unit (Normal): 1 00:17:46.108 Atomic Write Unit (PFail): 1 00:17:46.108 Atomic Compare & Write Unit: 1 00:17:46.108 Fused Compare & Write: Not Supported 00:17:46.108 Scatter-Gather List 00:17:46.108 SGL Command Set: Supported 00:17:46.108 SGL Keyed: Not Supported 00:17:46.108 SGL Bit Bucket Descriptor: Not Supported 00:17:46.108 SGL Metadata Pointer: Not Supported 00:17:46.108 Oversized SGL: Not Supported 00:17:46.108 SGL Metadata Address: Not Supported 00:17:46.108 SGL Offset: Supported 00:17:46.108 Transport SGL Data Block: Not Supported 00:17:46.108 Replay Protected Memory Block: Not Supported 00:17:46.108 00:17:46.108 Firmware Slot Information 00:17:46.108 ========================= 00:17:46.108 Active slot: 0 00:17:46.108 00:17:46.108 00:17:46.108 Error Log 
00:17:46.108 ========= 00:17:46.108 00:17:46.108 Active Namespaces 00:17:46.108 ================= 00:17:46.108 Discovery Log Page 00:17:46.108 ================== 00:17:46.108 Generation Counter: 2 00:17:46.108 Number of Records: 2 00:17:46.108 Record Format: 0 00:17:46.108 00:17:46.108 Discovery Log Entry 0 00:17:46.108 ---------------------- 00:17:46.108 Transport Type: 3 (TCP) 00:17:46.108 Address Family: 1 (IPv4) 00:17:46.108 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:46.108 Entry Flags: 00:17:46.108 Duplicate Returned Information: 0 00:17:46.108 Explicit Persistent Connection Support for Discovery: 0 00:17:46.108 Transport Requirements: 00:17:46.108 Secure Channel: Not Specified 00:17:46.108 Port ID: 1 (0x0001) 00:17:46.108 Controller ID: 65535 (0xffff) 00:17:46.108 Admin Max SQ Size: 32 00:17:46.108 Transport Service Identifier: 4420 00:17:46.108 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:46.108 Transport Address: 10.0.0.1 00:17:46.108 Discovery Log Entry 1 00:17:46.108 ---------------------- 00:17:46.108 Transport Type: 3 (TCP) 00:17:46.108 Address Family: 1 (IPv4) 00:17:46.108 Subsystem Type: 2 (NVM Subsystem) 00:17:46.108 Entry Flags: 00:17:46.108 Duplicate Returned Information: 0 00:17:46.108 Explicit Persistent Connection Support for Discovery: 0 00:17:46.108 Transport Requirements: 00:17:46.108 Secure Channel: Not Specified 00:17:46.108 Port ID: 1 (0x0001) 00:17:46.108 Controller ID: 65535 (0xffff) 00:17:46.108 Admin Max SQ Size: 32 00:17:46.108 Transport Service Identifier: 4420 00:17:46.108 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:46.108 Transport Address: 10.0.0.1 00:17:46.108 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:46.367 get_feature(0x01) failed 00:17:46.367 get_feature(0x02) failed 00:17:46.367 get_feature(0x04) failed 00:17:46.367 ===================================================== 00:17:46.367 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:46.367 ===================================================== 00:17:46.367 Controller Capabilities/Features 00:17:46.367 ================================ 00:17:46.367 Vendor ID: 0000 00:17:46.367 Subsystem Vendor ID: 0000 00:17:46.367 Serial Number: 302b90da4c7c45f3c5ca 00:17:46.367 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:46.367 Firmware Version: 6.8.9-20 00:17:46.367 Recommended Arb Burst: 6 00:17:46.367 IEEE OUI Identifier: 00 00 00 00:17:46.367 Multi-path I/O 00:17:46.367 May have multiple subsystem ports: Yes 00:17:46.367 May have multiple controllers: Yes 00:17:46.367 Associated with SR-IOV VF: No 00:17:46.367 Max Data Transfer Size: Unlimited 00:17:46.367 Max Number of Namespaces: 1024 00:17:46.367 Max Number of I/O Queues: 128 00:17:46.367 NVMe Specification Version (VS): 1.3 00:17:46.367 NVMe Specification Version (Identify): 1.3 00:17:46.367 Maximum Queue Entries: 1024 00:17:46.367 Contiguous Queues Required: No 00:17:46.367 Arbitration Mechanisms Supported 00:17:46.367 Weighted Round Robin: Not Supported 00:17:46.367 Vendor Specific: Not Supported 00:17:46.367 Reset Timeout: 7500 ms 00:17:46.367 Doorbell Stride: 4 bytes 00:17:46.367 NVM Subsystem Reset: Not Supported 00:17:46.367 Command Sets Supported 00:17:46.367 NVM Command Set: Supported 00:17:46.367 Boot Partition: Not Supported 00:17:46.367 Memory 
Page Size Minimum: 4096 bytes 00:17:46.367 Memory Page Size Maximum: 4096 bytes 00:17:46.367 Persistent Memory Region: Not Supported 00:17:46.367 Optional Asynchronous Events Supported 00:17:46.367 Namespace Attribute Notices: Supported 00:17:46.367 Firmware Activation Notices: Not Supported 00:17:46.368 ANA Change Notices: Supported 00:17:46.368 PLE Aggregate Log Change Notices: Not Supported 00:17:46.368 LBA Status Info Alert Notices: Not Supported 00:17:46.368 EGE Aggregate Log Change Notices: Not Supported 00:17:46.368 Normal NVM Subsystem Shutdown event: Not Supported 00:17:46.368 Zone Descriptor Change Notices: Not Supported 00:17:46.368 Discovery Log Change Notices: Not Supported 00:17:46.368 Controller Attributes 00:17:46.368 128-bit Host Identifier: Supported 00:17:46.368 Non-Operational Permissive Mode: Not Supported 00:17:46.368 NVM Sets: Not Supported 00:17:46.368 Read Recovery Levels: Not Supported 00:17:46.368 Endurance Groups: Not Supported 00:17:46.368 Predictable Latency Mode: Not Supported 00:17:46.368 Traffic Based Keep ALive: Supported 00:17:46.368 Namespace Granularity: Not Supported 00:17:46.368 SQ Associations: Not Supported 00:17:46.368 UUID List: Not Supported 00:17:46.368 Multi-Domain Subsystem: Not Supported 00:17:46.368 Fixed Capacity Management: Not Supported 00:17:46.368 Variable Capacity Management: Not Supported 00:17:46.368 Delete Endurance Group: Not Supported 00:17:46.368 Delete NVM Set: Not Supported 00:17:46.368 Extended LBA Formats Supported: Not Supported 00:17:46.368 Flexible Data Placement Supported: Not Supported 00:17:46.368 00:17:46.368 Controller Memory Buffer Support 00:17:46.368 ================================ 00:17:46.368 Supported: No 00:17:46.368 00:17:46.368 Persistent Memory Region Support 00:17:46.368 ================================ 00:17:46.368 Supported: No 00:17:46.368 00:17:46.368 Admin Command Set Attributes 00:17:46.368 ============================ 00:17:46.368 Security Send/Receive: Not Supported 00:17:46.368 Format NVM: Not Supported 00:17:46.368 Firmware Activate/Download: Not Supported 00:17:46.368 Namespace Management: Not Supported 00:17:46.368 Device Self-Test: Not Supported 00:17:46.368 Directives: Not Supported 00:17:46.368 NVMe-MI: Not Supported 00:17:46.368 Virtualization Management: Not Supported 00:17:46.368 Doorbell Buffer Config: Not Supported 00:17:46.368 Get LBA Status Capability: Not Supported 00:17:46.368 Command & Feature Lockdown Capability: Not Supported 00:17:46.368 Abort Command Limit: 4 00:17:46.368 Async Event Request Limit: 4 00:17:46.368 Number of Firmware Slots: N/A 00:17:46.368 Firmware Slot 1 Read-Only: N/A 00:17:46.368 Firmware Activation Without Reset: N/A 00:17:46.368 Multiple Update Detection Support: N/A 00:17:46.368 Firmware Update Granularity: No Information Provided 00:17:46.368 Per-Namespace SMART Log: Yes 00:17:46.368 Asymmetric Namespace Access Log Page: Supported 00:17:46.368 ANA Transition Time : 10 sec 00:17:46.368 00:17:46.368 Asymmetric Namespace Access Capabilities 00:17:46.368 ANA Optimized State : Supported 00:17:46.368 ANA Non-Optimized State : Supported 00:17:46.368 ANA Inaccessible State : Supported 00:17:46.368 ANA Persistent Loss State : Supported 00:17:46.368 ANA Change State : Supported 00:17:46.368 ANAGRPID is not changed : No 00:17:46.368 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:46.368 00:17:46.368 ANA Group Identifier Maximum : 128 00:17:46.368 Number of ANA Group Identifiers : 128 00:17:46.368 Max Number of Allowed Namespaces : 1024 00:17:46.368 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:17:46.368 Command Effects Log Page: Supported 00:17:46.368 Get Log Page Extended Data: Supported 00:17:46.368 Telemetry Log Pages: Not Supported 00:17:46.368 Persistent Event Log Pages: Not Supported 00:17:46.368 Supported Log Pages Log Page: May Support 00:17:46.368 Commands Supported & Effects Log Page: Not Supported 00:17:46.368 Feature Identifiers & Effects Log Page:May Support 00:17:46.368 NVMe-MI Commands & Effects Log Page: May Support 00:17:46.368 Data Area 4 for Telemetry Log: Not Supported 00:17:46.368 Error Log Page Entries Supported: 128 00:17:46.368 Keep Alive: Supported 00:17:46.368 Keep Alive Granularity: 1000 ms 00:17:46.368 00:17:46.368 NVM Command Set Attributes 00:17:46.368 ========================== 00:17:46.368 Submission Queue Entry Size 00:17:46.368 Max: 64 00:17:46.368 Min: 64 00:17:46.368 Completion Queue Entry Size 00:17:46.368 Max: 16 00:17:46.368 Min: 16 00:17:46.368 Number of Namespaces: 1024 00:17:46.368 Compare Command: Not Supported 00:17:46.368 Write Uncorrectable Command: Not Supported 00:17:46.368 Dataset Management Command: Supported 00:17:46.368 Write Zeroes Command: Supported 00:17:46.368 Set Features Save Field: Not Supported 00:17:46.368 Reservations: Not Supported 00:17:46.368 Timestamp: Not Supported 00:17:46.368 Copy: Not Supported 00:17:46.368 Volatile Write Cache: Present 00:17:46.368 Atomic Write Unit (Normal): 1 00:17:46.368 Atomic Write Unit (PFail): 1 00:17:46.368 Atomic Compare & Write Unit: 1 00:17:46.368 Fused Compare & Write: Not Supported 00:17:46.368 Scatter-Gather List 00:17:46.368 SGL Command Set: Supported 00:17:46.368 SGL Keyed: Not Supported 00:17:46.368 SGL Bit Bucket Descriptor: Not Supported 00:17:46.368 SGL Metadata Pointer: Not Supported 00:17:46.368 Oversized SGL: Not Supported 00:17:46.368 SGL Metadata Address: Not Supported 00:17:46.368 SGL Offset: Supported 00:17:46.368 Transport SGL Data Block: Not Supported 00:17:46.368 Replay Protected Memory Block: Not Supported 00:17:46.368 00:17:46.368 Firmware Slot Information 00:17:46.368 ========================= 00:17:46.368 Active slot: 0 00:17:46.368 00:17:46.368 Asymmetric Namespace Access 00:17:46.368 =========================== 00:17:46.368 Change Count : 0 00:17:46.368 Number of ANA Group Descriptors : 1 00:17:46.368 ANA Group Descriptor : 0 00:17:46.368 ANA Group ID : 1 00:17:46.368 Number of NSID Values : 1 00:17:46.368 Change Count : 0 00:17:46.368 ANA State : 1 00:17:46.368 Namespace Identifier : 1 00:17:46.368 00:17:46.368 Commands Supported and Effects 00:17:46.368 ============================== 00:17:46.368 Admin Commands 00:17:46.368 -------------- 00:17:46.368 Get Log Page (02h): Supported 00:17:46.368 Identify (06h): Supported 00:17:46.368 Abort (08h): Supported 00:17:46.368 Set Features (09h): Supported 00:17:46.368 Get Features (0Ah): Supported 00:17:46.368 Asynchronous Event Request (0Ch): Supported 00:17:46.368 Keep Alive (18h): Supported 00:17:46.368 I/O Commands 00:17:46.368 ------------ 00:17:46.368 Flush (00h): Supported 00:17:46.368 Write (01h): Supported LBA-Change 00:17:46.368 Read (02h): Supported 00:17:46.368 Write Zeroes (08h): Supported LBA-Change 00:17:46.368 Dataset Management (09h): Supported 00:17:46.368 00:17:46.368 Error Log 00:17:46.368 ========= 00:17:46.368 Entry: 0 00:17:46.368 Error Count: 0x3 00:17:46.368 Submission Queue Id: 0x0 00:17:46.368 Command Id: 0x5 00:17:46.368 Phase Bit: 0 00:17:46.368 Status Code: 0x2 00:17:46.368 Status Code Type: 0x0 00:17:46.368 Do Not Retry: 1 00:17:46.368 Error 
Location: 0x28 00:17:46.368 LBA: 0x0 00:17:46.368 Namespace: 0x0 00:17:46.368 Vendor Log Page: 0x0 00:17:46.368 ----------- 00:17:46.368 Entry: 1 00:17:46.368 Error Count: 0x2 00:17:46.368 Submission Queue Id: 0x0 00:17:46.368 Command Id: 0x5 00:17:46.368 Phase Bit: 0 00:17:46.368 Status Code: 0x2 00:17:46.369 Status Code Type: 0x0 00:17:46.369 Do Not Retry: 1 00:17:46.369 Error Location: 0x28 00:17:46.369 LBA: 0x0 00:17:46.369 Namespace: 0x0 00:17:46.369 Vendor Log Page: 0x0 00:17:46.369 ----------- 00:17:46.369 Entry: 2 00:17:46.369 Error Count: 0x1 00:17:46.369 Submission Queue Id: 0x0 00:17:46.369 Command Id: 0x4 00:17:46.369 Phase Bit: 0 00:17:46.369 Status Code: 0x2 00:17:46.369 Status Code Type: 0x0 00:17:46.369 Do Not Retry: 1 00:17:46.369 Error Location: 0x28 00:17:46.369 LBA: 0x0 00:17:46.369 Namespace: 0x0 00:17:46.369 Vendor Log Page: 0x0 00:17:46.369 00:17:46.369 Number of Queues 00:17:46.369 ================ 00:17:46.369 Number of I/O Submission Queues: 128 00:17:46.369 Number of I/O Completion Queues: 128 00:17:46.369 00:17:46.369 ZNS Specific Controller Data 00:17:46.369 ============================ 00:17:46.369 Zone Append Size Limit: 0 00:17:46.369 00:17:46.369 00:17:46.369 Active Namespaces 00:17:46.369 ================= 00:17:46.369 get_feature(0x05) failed 00:17:46.369 Namespace ID:1 00:17:46.369 Command Set Identifier: NVM (00h) 00:17:46.369 Deallocate: Supported 00:17:46.369 Deallocated/Unwritten Error: Not Supported 00:17:46.369 Deallocated Read Value: Unknown 00:17:46.369 Deallocate in Write Zeroes: Not Supported 00:17:46.369 Deallocated Guard Field: 0xFFFF 00:17:46.369 Flush: Supported 00:17:46.369 Reservation: Not Supported 00:17:46.369 Namespace Sharing Capabilities: Multiple Controllers 00:17:46.369 Size (in LBAs): 1310720 (5GiB) 00:17:46.369 Capacity (in LBAs): 1310720 (5GiB) 00:17:46.369 Utilization (in LBAs): 1310720 (5GiB) 00:17:46.369 UUID: 1a26fb9c-2d3a-48d9-a35a-4d941f3ab409 00:17:46.369 Thin Provisioning: Not Supported 00:17:46.369 Per-NS Atomic Units: Yes 00:17:46.369 Atomic Boundary Size (Normal): 0 00:17:46.369 Atomic Boundary Size (PFail): 0 00:17:46.369 Atomic Boundary Offset: 0 00:17:46.369 NGUID/EUI64 Never Reused: No 00:17:46.369 ANA group ID: 1 00:17:46.369 Namespace Write Protected: No 00:17:46.369 Number of LBA Formats: 1 00:17:46.369 Current LBA Format: LBA Format #00 00:17:46.369 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:46.369 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:46.369 rmmod nvme_tcp 00:17:46.369 rmmod nvme_fabrics 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:17:46.369 16:05:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:46.369 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:46.640 16:05:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:47.574 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:47.574 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:47.574 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:47.574 00:17:47.574 real 0m3.189s 00:17:47.574 user 0m1.155s 00:17:47.574 sys 0m1.427s 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.574 ************************************ 00:17:47.574 END TEST nvmf_identify_kernel_target 00:17:47.574 ************************************ 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:47.574 ************************************ 00:17:47.574 START TEST nvmf_auth_host 00:17:47.574 ************************************ 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:47.574 * Looking for test storage... 
00:17:47.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:47.574 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.834 --rc genhtml_branch_coverage=1 00:17:47.834 --rc genhtml_function_coverage=1 00:17:47.834 --rc genhtml_legend=1 00:17:47.834 --rc geninfo_all_blocks=1 00:17:47.834 --rc geninfo_unexecuted_blocks=1 00:17:47.834 00:17:47.834 ' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.834 --rc genhtml_branch_coverage=1 00:17:47.834 --rc genhtml_function_coverage=1 00:17:47.834 --rc genhtml_legend=1 00:17:47.834 --rc geninfo_all_blocks=1 00:17:47.834 --rc geninfo_unexecuted_blocks=1 00:17:47.834 00:17:47.834 ' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.834 --rc genhtml_branch_coverage=1 00:17:47.834 --rc genhtml_function_coverage=1 00:17:47.834 --rc genhtml_legend=1 00:17:47.834 --rc geninfo_all_blocks=1 00:17:47.834 --rc geninfo_unexecuted_blocks=1 00:17:47.834 00:17:47.834 ' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:47.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:47.834 --rc genhtml_branch_coverage=1 00:17:47.834 --rc genhtml_function_coverage=1 00:17:47.834 --rc genhtml_legend=1 00:17:47.834 --rc geninfo_all_blocks=1 00:17:47.834 --rc geninfo_unexecuted_blocks=1 00:17:47.834 00:17:47.834 ' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.834 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.835 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:47.835 Cannot find device "nvmf_init_br" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:47.835 Cannot find device "nvmf_init_br2" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:47.835 Cannot find device "nvmf_tgt_br" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:47.835 Cannot find device "nvmf_tgt_br2" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:47.835 Cannot find device "nvmf_init_br" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:47.835 Cannot find device "nvmf_init_br2" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:47.835 Cannot find device "nvmf_tgt_br" 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:17:47.835 16:05:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:47.835 Cannot find device "nvmf_tgt_br2" 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:47.835 Cannot find device "nvmf_br" 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:47.835 Cannot find device "nvmf_init_if" 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:47.835 Cannot find device "nvmf_init_if2" 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:47.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.835 16:05:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:47.835 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:47.835 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:48.095 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:48.095 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:48.095 00:17:48.095 --- 10.0.0.3 ping statistics --- 00:17:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.095 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:48.095 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:48.095 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:17:48.095 00:17:48.095 --- 10.0.0.4 ping statistics --- 00:17:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.095 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:48.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:48.095 00:17:48.095 --- 10.0.0.1 ping statistics --- 00:17:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.095 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:48.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:48.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:48.095 00:17:48.095 --- 10.0.0.2 ping statistics --- 00:17:48.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.095 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78818 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78818 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78818 ']' 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
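
Before the target application comes up, nvmftestinit/nvmf_veth_init (the run of ip and iptables commands above) assembles a self-contained test network: two initiator veth pairs stay on the host with 10.0.0.1/24 and 10.0.0.2/24, two target pairs are pushed into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/24 and 10.0.0.4/24, the four host-side peers are joined through the nvmf_br bridge, and iptables ACCEPT rules tagged with an SPDK_NVMF comment open TCP port 4420 so the cleanup seen earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip exactly those rules. The sketch below condenses the same commands into a standalone script; interface, namespace, and address names simply mirror the log, and it assumes root privileges with iproute2 and iptables available.

#!/usr/bin/env bash
# Minimal sketch of the veth/bridge test network built by nvmf_veth_init above.
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# 10.0.0.1/.2 for the initiator side, 10.0.0.3/.4 inside the target namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, including loopback inside the namespace
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# single bridge joining all host-side veth peers
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# open NVMe/TCP port 4420 and allow bridge-internal forwarding; the comment tag is
# what the "grep -v SPDK_NVMF" cleanup keys on (the harness embeds the full rule
# text after "SPDK_NVMF:", shortened to the bare tag here)
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

# reachability check, as in the log
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
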
00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.095 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91206130457ef5dfe4ad219634a917b1 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4yl 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91206130457ef5dfe4ad219634a917b1 0 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91206130457ef5dfe4ad219634a917b1 0 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.697 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91206130457ef5dfe4ad219634a917b1 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4yl 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4yl 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4yl 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.698 16:05:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=886f3762df45141b9927a54dd0f60e29804aac4d7050f2651c740ff67cc26527 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.DDs 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 886f3762df45141b9927a54dd0f60e29804aac4d7050f2651c740ff67cc26527 3 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 886f3762df45141b9927a54dd0f60e29804aac4d7050f2651c740ff67cc26527 3 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=886f3762df45141b9927a54dd0f60e29804aac4d7050f2651c740ff67cc26527 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.DDs 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.DDs 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DDs 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:48.698 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6e788a819ce3a606b9cde96b76b57b6207fb34cfac831a7f 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ncc 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6e788a819ce3a606b9cde96b76b57b6207fb34cfac831a7f 0 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6e788a819ce3a606b9cde96b76b57b6207fb34cfac831a7f 0 
00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6e788a819ce3a606b9cde96b76b57b6207fb34cfac831a7f 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:48.957 16:05:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ncc 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ncc 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ncc 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.957 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e691acbd64361361bc3acfaed4a850920e35f76cd5e05759 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.U2t 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e691acbd64361361bc3acfaed4a850920e35f76cd5e05759 2 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e691acbd64361361bc3acfaed4a850920e35f76cd5e05759 2 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e691acbd64361361bc3acfaed4a850920e35f76cd5e05759 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.U2t 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.U2t 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.U2t 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.958 16:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1723cc4e80e05dbf9eacf42412501329 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b4r 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1723cc4e80e05dbf9eacf42412501329 1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1723cc4e80e05dbf9eacf42412501329 1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1723cc4e80e05dbf9eacf42412501329 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b4r 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b4r 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.b4r 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=410edd40c0ddff4f1c9886defbdb4a02 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9nL 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 410edd40c0ddff4f1c9886defbdb4a02 1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 410edd40c0ddff4f1c9886defbdb4a02 1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=410edd40c0ddff4f1c9886defbdb4a02 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:48.958 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9nL 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9nL 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9nL 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cc9c1f7f963ae7461f68fa68dce4de390550a8a06dbdbd7c 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.yUT 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cc9c1f7f963ae7461f68fa68dce4de390550a8a06dbdbd7c 2 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cc9c1f7f963ae7461f68fa68dce4de390550a8a06dbdbd7c 2 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cc9c1f7f963ae7461f68fa68dce4de390550a8a06dbdbd7c 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.yUT 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.yUT 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yUT 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:17:49.218 16:05:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=badb215309250d361fb3988338cac1bf 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.F3D 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key badb215309250d361fb3988338cac1bf 0 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 badb215309250d361fb3988338cac1bf 0 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=badb215309250d361fb3988338cac1bf 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.F3D 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.F3D 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.F3D 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6b6e09d8cc5830ddba461e6ccbee558ff2e94ba31fc97744b3679fa75121ea1f 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ecd 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6b6e09d8cc5830ddba461e6ccbee558ff2e94ba31fc97744b3679fa75121ea1f 3 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6b6e09d8cc5830ddba461e6ccbee558ff2e94ba31fc97744b3679fa75121ea1f 3 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6b6e09d8cc5830ddba461e6ccbee558ff2e94ba31fc97744b3679fa75121ea1f 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ecd 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ecd 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ecd 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78818 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 78818 ']' 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.218 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4yl 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DDs ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DDs 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ncc 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.U2t ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.U2t 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.b4r 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9nL ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9nL 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.yUT 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.F3D ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.F3D 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ecd 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:49.787 16:05:47 
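The gen_dhchap_key/format_dhchap_key calls traced above turn a random hex string from /dev/urandom into the DHHC-1 secrets that appear later in the trace (for example, 6e788a81... becomes DHHC-1:00:NmU3ODhh...:). A plausible reconstruction of that formatting step is sketched below, assuming the usual NVMe DH-HMAC-CHAP secret layout: the ASCII secret bytes followed by a little-endian CRC32, base64-encoded behind a two-digit hash identifier (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). The exact snippet behind the traced `python -` is not visible in the log, and the helper name below is mine, so treat this as a sketch rather than the script's literal code.

# Sketch of gen_dhchap_key as observed in the trace: draw random bytes, format
# them as a DHHC-1 secret, store the result in a 0600 temp file. The base64
# payload layout (secret + little-endian CRC32) is an assumption based on the
# NVMe DH-HMAC-CHAP secret representation; the trace only shows the results.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2          # digest: 0=null 1=sha256 2=sha384 3=sha512
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # $len hex characters
    file=$(mktemp -t spdk.key-demo.XXX)
    python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = key + zlib.crc32(key).to_bytes(4, "little")   # CRC32 tail is an assumption
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(payload).decode()), end="")
EOF
    chmod 0600 "$file"
    echo "$file"
}

# e.g. gen_dhchap_key_sketch 0 32  ->  a temp file containing DHHC-1:00:...: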
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:49.787 16:05:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:50.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:50.046 Waiting for block devices as requested 00:17:50.046 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:50.305 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:50.874 No valid GPT data, bailing 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:17:50.874 16:05:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:50.874 No valid GPT data, bailing 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:50.874 No valid GPT data, bailing 00:17:50.874 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:51.132 No valid GPT data, bailing 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:51.132 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -a 10.0.0.1 -t tcp -s 4420 00:17:51.132 00:17:51.132 Discovery Log Number of Records 2, Generation counter 2 00:17:51.132 =====Discovery Log Entry 0====== 00:17:51.132 trtype: tcp 00:17:51.132 adrfam: ipv4 00:17:51.132 subtype: current discovery subsystem 00:17:51.132 treq: not specified, sq flow control disable supported 00:17:51.132 portid: 1 00:17:51.132 trsvcid: 4420 00:17:51.132 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:51.132 traddr: 10.0.0.1 00:17:51.132 eflags: none 00:17:51.132 sectype: none 00:17:51.132 =====Discovery Log Entry 1====== 00:17:51.132 trtype: tcp 00:17:51.132 adrfam: ipv4 00:17:51.132 subtype: nvme subsystem 00:17:51.132 treq: not specified, sq flow control disable supported 00:17:51.132 portid: 1 00:17:51.132 trsvcid: 4420 00:17:51.132 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:51.132 traddr: 10.0.0.1 00:17:51.132 eflags: none 00:17:51.133 sectype: none 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
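Underneath the mkdir/echo/ln -s calls above, configure_kernel_target is driving the kernel nvmet configfs interface: create the subsystem and a namespace backed by the unused /dev/nvme1n1, expose it on a TCP port at 10.0.0.1:4420, and allow only the test host NQN. A condensed sketch follows; the trace does not show which attribute file each echo lands in, so the file names below are the standard kernel nvmet configfs ones and the mapping is inferred, not taken from the log.

# Equivalent configfs steps for the kernel target set up above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # unused disk found by the GPT scan
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Restrict access to the test host NQN instead of allow_any_host
# (the host/auth.sh steps in the trace).
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The two-record output of nvme discover above (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420) is the check that this port actually went live.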
ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.133 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 nvme0n1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.392 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.652 nvme0n1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.652 
16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.652 16:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.652 nvme0n1 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.652 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.653 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.653 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.912 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:51.913 16:05:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.913 16:05:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.913 nvme0n1 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.913 16:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:51.913 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:51.914 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.914 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 nvme0n1 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:52.173 
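The trace above, and the iterations that follow it, repeat the same host-side sequence once per DH-HMAC-CHAP key and DH group: restrict the initiator to the digest/dhgroup pair under test, attach the controller with that key, confirm the controller came up, then detach before the next combination. A condensed sketch of one ffdhe2048 iteration is shown below; it assumes the rpc_cmd helper seen in the trace forwards to SPDK's scripts/rpc.py, and that key1/ckey1 name keys registered earlier in the run (that setup is not shown in this part of the log).

    rpc=scripts/rpc.py   # illustrative path; rpc_cmd in the trace issues the same RPCs

    # Allow only the digest and DH group being exercised in this iteration.
    $rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach over TCP to the address resolved by get_main_ns_ip
    # (10.0.0.1, taken from NVMF_INITIATOR_IP in the trace), authenticating
    # with key1 and the bidirectional controller key ckey1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The iteration passes if a controller named nvme0 is reported.
    $rpc bdev_nvme_get_controllers | jq -r '.[].name'

    # Tear down so the next keyid/dhgroup combination starts clean.
    $rpc bdev_nvme_detach_controller nvme0

The 'hmac(sha256)', dhgroup, and DHHC-1 strings echoed by nvmet_auth_set_key in the trace are the matching secrets being configured on the target side, so both ends of each authenticated connection use the same key material.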
16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.173 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.174 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:17:52.433 nvme0n1 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.433 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:52.692 16:05:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.692 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.951 nvme0n1 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.951 16:05:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.951 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.951 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:52.951 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.951 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.951 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:52.952 16:05:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.952 16:05:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.952 nvme0n1 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.952 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 nvme0n1 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.212 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.213 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.472 nvme0n1 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.472 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.731 nvme0n1 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:53.731 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:53.732 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:53.732 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:53.732 16:05:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.298 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:54.298 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.299 16:05:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.299 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.557 nvme0n1 00:17:54.557 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.557 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.558 16:05:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.558 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.817 nvme0n1 00:17:54.817 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.817 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:54.817 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:54.817 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.817 16:05:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.817 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.076 nvme0n1 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.076 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.336 nvme0n1 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.336 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:55.596 16:05:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.596 nvme0n1 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.596 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:55.855 16:05:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.756 nvme0n1 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.756 16:05:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.756 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.756 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:57.756 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.016 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.276 nvme0n1 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.276 16:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.276 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.277 16:05:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.277 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.845 nvme0n1 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:58.845 16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.845 
16:05:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.106 nvme0n1 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.106 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 nvme0n1 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:59.690 16:05:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.690 16:05:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.257 nvme0n1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.257 16:05:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.825 nvme0n1 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.825 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.084 
16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.084 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.652 nvme0n1 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:01.652 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.653 16:05:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.220 nvme0n1 00:18:02.220 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.220 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:02.220 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.220 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:02.220 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.220 16:06:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.221 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.221 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:02.221 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.221 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:02.480 16:06:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.480 16:06:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.049 nvme0n1 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.049 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.050 nvme0n1 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.050 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 nvme0n1 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:03.310 
16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.310 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.311 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.570 nvme0n1 00:18:03.570 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.571 
16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.571 nvme0n1 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.571 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.830 nvme0n1 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.830 16:06:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:03.830 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.831 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 nvme0n1 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.089 
16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:04.089 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.090 16:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.090 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.348 nvme0n1 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:04.348 16:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.348 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.349 nvme0n1 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.349 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.608 16:06:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 nvme0n1 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.608 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:04.903 
16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.903 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
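Each pass of the trace above repeats the same authentication round-trip from host/auth.sh: install the target-side key with nvmet_auth_set_key, restrict the host to a single digest/dhgroup pair with bdev_nvme_set_options, then attach, verify, and detach a controller. A minimal sketch of one such iteration, assuming the same rpc_cmd wrapper used by the trace and with placeholder key material (not the values used by this run), is:

# One iteration of the digest/dhgroup/keyid sweep (sketch only; key strings are placeholders)
digest=sha384 dhgroup=ffdhe3072 keyid=3
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"                      # target side: set DH-HMAC-CHAP key for this keyid
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"                                  # host side: allow only this digest/dhgroup pair
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"      # ctrlr key is omitted when ckey is empty (e.g. keyid=4)
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')          # expect "nvme0" if authentication succeeded
[[ $name == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0                             # tear down before the next combination

All RPC names and flags above appear verbatim in the trace; only the variable assignments and key placeholders are illustrative.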
00:18:04.904 nvme0n1 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.904 16:06:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:04.904 16:06:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.904 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.163 nvme0n1 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.163 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.164 16:06:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.164 16:06:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.164 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 nvme0n1 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.423 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.684 nvme0n1 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.684 16:06:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 nvme0n1 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:05.944 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.945 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.204 nvme0n1 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.204 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.205 16:06:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.205 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.772 nvme0n1 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.772 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.773 16:06:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.773 16:06:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.032 nvme0n1 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.032 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.324 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.583 nvme0n1 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.583 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.584 16:06:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.842 nvme0n1 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:07.842 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:08.101 16:06:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.101 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 nvme0n1 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.360 16:06:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:08.943 nvme0n1 00:18:08.944 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.944 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.201 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.202 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.202 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 nvme0n1 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.764 16:06:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.764 16:06:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.764 16:06:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.328 nvme0n1 00:18:10.328 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.328 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:10.329 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.329 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:10.329 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:10.587 16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.587 
16:06:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.152 nvme0n1 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.152 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.719 nvme0n1 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.719 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.978 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.978 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.978 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.978 16:06:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:11.978 16:06:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.978 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.979 16:06:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.979 nvme0n1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:11.979 16:06:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.979 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 nvme0n1 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 nvme0n1 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.239 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.498 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 nvme0n1 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.499 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.758 nvme0n1 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:12.758 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.759 16:06:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.018 nvme0n1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.018 nvme0n1 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.018 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:13.276 
16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:13.276 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.277 nvme0n1 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:13.277 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.536 
16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.536 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 nvme0n1 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.537 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.796 nvme0n1 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.796 16:06:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.055 nvme0n1 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.055 
16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.055 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.056 16:06:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.056 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.314 nvme0n1 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.314 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:14.315 16:06:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.315 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.648 nvme0n1 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:14.648 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.649 16:06:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.649 16:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 nvme0n1 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:14.923 
16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.923 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
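The passes above and below all drive the same host-side sequence, once per digest/dhgroup/keyid combination: install the key on the kernel nvmet target (nvmet_auth_set_key, whose body is not shown in this excerpt), restrict the host to one digest and one DH group, attach with the matching key pair, confirm the controller came up, then detach. A minimal sketch of one such pass, using only the RPCs visible in this trace and the key names key0/ckey0 that the harness registered before this excerpt, would look roughly like:

  # hedged sketch of a single connect_authenticate pass, not the harness script itself
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 once DH-HMAC-CHAP succeeds
  rpc_cmd bdev_nvme_detach_controller nvme0

Here rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, and ffdhe6144 matches the DH group exercised in the entries that follow; the DHHC-1 secrets echoed in the surrounding log are the harness-generated key material behind those key names.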
00:18:15.183 nvme0n1 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:15.183 16:06:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.183 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.751 nvme0n1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:15.751 16:06:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:15.751 16:06:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.751 16:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.010 nvme0n1 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.010 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.579 nvme0n1 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.579 16:06:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.838 nvme0n1 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:16.838 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.096 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 nvme0n1 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
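
The `key=DHHC-1:xx:...:` and `ckey=DHHC-1:xx:...:` strings echoed in this trace are DH-HMAC-CHAP secrets in the NVMe TP 8006 representation: the two digits after `DHHC-1:` name the transformation applied to the secret (00 = used as-is, 01/02/03 = SHA-256/384/512), and the base64 payload carries the secret material plus a CRC-32 so a corrupted key is rejected before any handshake. Key id 4 is generated with an empty `ckey`, so that iteration authenticates the host only, while the ids that carry both `key` and `ckey` exercise bidirectional authentication. A minimal sketch of producing secrets in this form with nvme-cli follows; the subcommand is standard, but the exact flag spellings vary by nvme-cli version, so treat them as an assumption rather than part of this test.

```bash
# Sketch (assumes a recent nvme-cli): generate DH-HMAC-CHAP secrets in the
# same DHHC-1:<hmac>:<base64 secret + CRC-32>: form seen in the trace above.
nvme gen-dhchap-key --key-length 32 --hmac 0   # "DHHC-1:00:..." - secret used as-is
nvme gen-dhchap-key --key-length 48 --hmac 2   # "DHHC-1:02:..." - SHA-384 transformed
nvme gen-dhchap-key --key-length 64 --hmac 3   # "DHHC-1:03:..." - SHA-512 transformed
```
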
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTEyMDYxMzA0NTdlZjVkZmU0YWQyMTk2MzRhOTE3YjGqHRP+: 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: ]] 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODg2ZjM3NjJkZjQ1MTQxYjk5MjdhNTRkZDBmNjBlMjk4MDRhYWM0ZDcwNTBmMjY1MWM3NDBmZjY3Y2MyNjUyN66TTJw=: 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.355 16:06:15 
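
The block of `nvmf/common.sh@769-783` lines that repeats before every attach is the `get_main_ns_ip` helper: it maps the transport under test to the environment variable holding the address to dial (`NVMF_FIRST_TARGET_IP` for RDMA, `NVMF_INITIATOR_IP` for TCP), which is why every expansion in this run resolves to 10.0.0.1. A condensed sketch of that helper, assuming the transport is exported as `TEST_TRANSPORT` as elsewhere in the SPDK test scripts:

```bash
# Condensed sketch of the helper traced as nvmf/common.sh@769-783.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

    ip=${ip_candidates[$TEST_TRANSPORT]}   # variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -n $ip && -n ${!ip} ]] || return 1  # both the mapping and its value must exist
    echo "${!ip}"                          # indirect expansion -> e.g. 10.0.0.1
}
```
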
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.355 16:06:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 nvme0n1 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:17.921 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:18.180 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.180 16:06:16 
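
Each `connect_authenticate <digest> <dhgroup> <keyid>` iteration in this trace follows the same four-step cycle: pin the host to a single digest and DH group, attach with the key (and, when present, controller key) for that key id, confirm the controller actually came up, then tear it down before the next combination. A stand-alone sketch of one iteration, assuming SPDK's `scripts/rpc.py` is on `PATH`, the target is already listening on 10.0.0.1:4420, and keyring entries named `key1`/`ckey1` were registered earlier in the test:

```bash
# One connect_authenticate-style cycle (sketch; names and addresses mirror the trace).
rpc=rpc.py
digest=sha512
dhgroup=ffdhe8192

# Only allow the digest/DH group combination under test to be negotiated.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with host key key1 and controller key ckey1 (bidirectional auth).
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The controller only exists if the DH-HMAC-CHAP handshake succeeded.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Clean up so the next digest/dhgroup/key combination starts fresh.
"$rpc" bdev_nvme_detach_controller nvme0
```
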
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.181 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.747 nvme0n1 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.747 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.748 16:06:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.315 nvme0n1 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.315 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2M5YzFmN2Y5NjNhZTc0NjFmNjhmYTY4ZGNlNGRlMzkwNTUwYThhMDZkYmRiZDdjdUFXmg==: 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YmFkYjIxNTMwOTI1MGQzNjFmYjM5ODgzMzhjYWMxYmZm0Zpz: 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.575 16:06:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 nvme0n1 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NmI2ZTA5ZDhjYzU4MzBkZGJhNDYxZTZjY2JlZTU1OGZmMmU5NGJhMzFmYzk3NzQ0YjM2NzlmYTc1MTIxZWExZjOGrUQ=: 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:20.144 16:06:18 
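
Key id 4 carries no controller secret (`ckey=''` above), so the script must pass `--dhchap-ctrlr-key` for some key ids and omit it entirely for others. The `host/auth.sh@58` expansion seen throughout the trace does that with an array built from a `:+` parameter expansion; a stand-alone sketch of the idiom (array contents here are hypothetical):

```bash
# Sketch of the optional-argument idiom traced as host/auth.sh@58.
# ckeys[] holds controller secrets per key id; id 4 intentionally has none.
ckeys=("DHHC-1:03:aaaa:" "DHHC-1:02:bbbb:" "DHHC-1:01:cccc:" "DHHC-1:00:dddd:" "")

for keyid in "${!ckeys[@]}"; do
    # Expands to two words (--dhchap-ctrlr-key ckeyN) when a controller secret
    # exists for this id, and to zero words when it does not.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo rpc.py bdev_nvme_attach_controller -b nvme0 --dhchap-key "key${keyid}" "${ckey[@]}"
done
```
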
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.144 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.712 nvme0n1 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.712 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.970 16:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.970 request: 00:18:20.970 { 00:18:20.970 "name": "nvme0", 00:18:20.970 "trtype": "tcp", 00:18:20.970 "traddr": "10.0.0.1", 00:18:20.970 "adrfam": "ipv4", 00:18:20.970 "trsvcid": "4420", 00:18:20.970 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:20.970 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:20.970 "prchk_reftag": false, 00:18:20.970 "prchk_guard": false, 00:18:20.970 "hdgst": false, 00:18:20.970 "ddgst": false, 00:18:20.970 "allow_unrecognized_csi": false, 00:18:20.970 "method": "bdev_nvme_attach_controller", 00:18:20.970 "req_id": 1 00:18:20.970 } 00:18:20.970 Got JSON-RPC error response 00:18:20.970 response: 00:18:20.970 { 00:18:20.970 "code": -5, 00:18:20.970 "message": "Input/output error" 00:18:20.970 } 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.970 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
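
After the positive matrix, the script flips to negative cases: `host/auth.sh@112` attaches with no DH-CHAP key at all (the request and its `-5` "Input/output error" response are dumped above), and `@117` repeats the attempt with only `key2`, which follows the same pattern below. Because the target still demands authentication, both RPCs are required to fail, and the `NOT` wrapper from `autotest_common.sh` inverts the exit status so an unexpected success aborts the test. A simplified sketch of that pattern (the real `NOT` also distinguishes crashes, exit codes above 128, from ordinary failures):

```bash
# Simplified stand-in for the NOT helper: succeed only when the command fails.
NOT() { ! "$@"; }

# The target requires DH-HMAC-CHAP, so attaching without any key must fail
# (expected error: code -5, "Input/output error", as in the trace above).
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    || { echo "unauthenticated attach unexpectedly succeeded" >&2; exit 1; }
```
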
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.971 request: 00:18:20.971 { 00:18:20.971 "name": "nvme0", 00:18:20.971 "trtype": "tcp", 00:18:20.971 "traddr": "10.0.0.1", 00:18:20.971 "adrfam": "ipv4", 00:18:20.971 "trsvcid": "4420", 00:18:20.971 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:20.971 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:20.971 "prchk_reftag": false, 00:18:20.971 "prchk_guard": false, 00:18:20.971 "hdgst": false, 00:18:20.971 "ddgst": false, 00:18:20.971 "dhchap_key": "key2", 00:18:20.971 "allow_unrecognized_csi": false, 00:18:20.971 "method": "bdev_nvme_attach_controller", 00:18:20.971 "req_id": 1 00:18:20.971 } 00:18:20.971 Got JSON-RPC error response 00:18:20.971 response: 00:18:20.971 { 00:18:20.971 "code": -5, 00:18:20.971 "message": "Input/output error" 00:18:20.971 } 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.971 16:06:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:20.971 request: 00:18:20.971 { 00:18:20.971 "name": "nvme0", 00:18:20.971 "trtype": "tcp", 00:18:20.971 "traddr": "10.0.0.1", 00:18:20.971 "adrfam": "ipv4", 00:18:20.971 "trsvcid": "4420", 
00:18:20.971 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:18:20.971 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:18:20.971 "prchk_reftag": false, 00:18:20.971 "prchk_guard": false, 00:18:20.971 "hdgst": false, 00:18:20.971 "ddgst": false, 00:18:20.971 "dhchap_key": "key1", 00:18:20.971 "dhchap_ctrlr_key": "ckey2", 00:18:20.971 "allow_unrecognized_csi": false, 00:18:20.971 "method": "bdev_nvme_attach_controller", 00:18:20.971 "req_id": 1 00:18:20.971 } 00:18:20.971 Got JSON-RPC error response 00:18:20.971 response: 00:18:20.971 { 00:18:20.971 "code": -5, 00:18:20.971 "message": "Input/output error" 00:18:20.971 } 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.971 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.228 nvme0n1 00:18:21.228 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.228 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:21.228 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:21.228 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 request: 00:18:21.229 { 00:18:21.229 "name": "nvme0", 00:18:21.229 "dhchap_key": "key1", 00:18:21.229 "dhchap_ctrlr_key": "ckey2", 00:18:21.229 "method": "bdev_nvme_set_keys", 00:18:21.229 "req_id": 1 00:18:21.229 } 00:18:21.229 Got JSON-RPC error response 00:18:21.229 response: 00:18:21.229 
{ 00:18:21.229 "code": -13, 00:18:21.229 "message": "Permission denied" 00:18:21.229 } 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:18:21.229 16:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU3ODhhODE5Y2UzYTYwNmI5Y2RlOTZiNzZiNTdiNjIwN2ZiMzRjZmFjODMxYTdmo/PBMA==: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY5MWFjYmQ2NDM2MTM2MWJjM2FjZmFlZDRhODUwOTIwZTM1Zjc2Y2Q1ZTA1NzU57VJIKg==: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.605 nvme0n1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTcyM2NjNGU4MGUwNWRiZjllYWNmNDI0MTI1MDEzMjmW43C8: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDEwZWRkNDBjMGRkZmY0ZjFjOTg4NmRlZmJkYjRhMDLLZ7Bm: 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.605 request: 00:18:22.605 { 00:18:22.605 "name": "nvme0", 00:18:22.605 "dhchap_key": "key2", 00:18:22.605 "dhchap_ctrlr_key": "ckey1", 00:18:22.605 "method": "bdev_nvme_set_keys", 00:18:22.605 "req_id": 1 00:18:22.605 } 00:18:22.605 Got JSON-RPC error response 00:18:22.605 response: 00:18:22.605 { 00:18:22.605 "code": -13, 00:18:22.605 "message": "Permission denied" 00:18:22.605 } 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:18:22.605 16:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:18:23.543 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.802 rmmod nvme_tcp 00:18:23.802 rmmod nvme_fabrics 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78818 ']' 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78818 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 78818 ']' 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 78818 00:18:23.802 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78818 00:18:23.803 killing process with pid 78818 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78818' 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 78818 00:18:23.803 16:06:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 78818 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:18:23.803 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:24.061 16:06:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.061 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:18:24.368 16:06:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:24.935 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:24.935 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:18:25.191 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:25.191 16:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4yl /tmp/spdk.key-null.ncc /tmp/spdk.key-sha256.b4r /tmp/spdk.key-sha384.yUT /tmp/spdk.key-sha512.ecd /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:18:25.191 16:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:25.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:25.448 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.448 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:25.448 00:18:25.448 real 0m37.947s 00:18:25.448 user 0m34.400s 00:18:25.448 sys 0m3.901s 00:18:25.448 ************************************ 00:18:25.448 END TEST nvmf_auth_host 00:18:25.448 ************************************ 00:18:25.448 16:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.448 16:06:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.706 16:06:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:18:25.706 16:06:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.707 ************************************ 00:18:25.707 START TEST nvmf_digest 00:18:25.707 ************************************ 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:18:25.707 * Looking for test storage... 
00:18:25.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.707 --rc genhtml_branch_coverage=1 00:18:25.707 --rc genhtml_function_coverage=1 00:18:25.707 --rc genhtml_legend=1 00:18:25.707 --rc geninfo_all_blocks=1 00:18:25.707 --rc geninfo_unexecuted_blocks=1 00:18:25.707 00:18:25.707 ' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.707 --rc genhtml_branch_coverage=1 00:18:25.707 --rc genhtml_function_coverage=1 00:18:25.707 --rc genhtml_legend=1 00:18:25.707 --rc geninfo_all_blocks=1 00:18:25.707 --rc geninfo_unexecuted_blocks=1 00:18:25.707 00:18:25.707 ' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.707 --rc genhtml_branch_coverage=1 00:18:25.707 --rc genhtml_function_coverage=1 00:18:25.707 --rc genhtml_legend=1 00:18:25.707 --rc geninfo_all_blocks=1 00:18:25.707 --rc geninfo_unexecuted_blocks=1 00:18:25.707 00:18:25.707 ' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.707 --rc genhtml_branch_coverage=1 00:18:25.707 --rc genhtml_function_coverage=1 00:18:25.707 --rc genhtml_legend=1 00:18:25.707 --rc geninfo_all_blocks=1 00:18:25.707 --rc geninfo_unexecuted_blocks=1 00:18:25.707 00:18:25.707 ' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.707 16:06:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.707 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.708 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:25.708 Cannot find device "nvmf_init_br" 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:25.708 Cannot find device "nvmf_init_br2" 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:18:25.708 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:25.968 Cannot find device "nvmf_tgt_br" 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:18:25.968 Cannot find device "nvmf_tgt_br2" 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:25.968 Cannot find device "nvmf_init_br" 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:25.968 Cannot find device "nvmf_init_br2" 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:18:25.968 16:06:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:25.968 Cannot find device "nvmf_tgt_br" 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:25.968 Cannot find device "nvmf_tgt_br2" 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:25.968 Cannot find device "nvmf_br" 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:25.968 Cannot find device "nvmf_init_if" 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.968 Cannot find device "nvmf_init_if2" 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.968 16:06:24 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.968 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:26.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:26.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:18:26.227 00:18:26.227 --- 10.0.0.3 ping statistics --- 00:18:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.227 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:26.227 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:26.227 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:26.227 00:18:26.227 --- 10.0.0.4 ping statistics --- 00:18:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.227 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:26.227 00:18:26.227 --- 10.0.0.1 ping statistics --- 00:18:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.227 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:26.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:26.227 00:18:26.227 --- 10.0.0.2 ping statistics --- 00:18:26.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.227 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:26.227 ************************************ 00:18:26.227 START TEST nvmf_digest_clean 00:18:26.227 ************************************ 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
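At this point the bridged veth topology built by nvmf_veth_init is fully up: the host-side initiator addresses 10.0.0.1/10.0.0.2 and the namespaced target addresses 10.0.0.3/10.0.0.4 all answer the ping checks above, and nvme-tcp has been modprobed, so the digest tests can start. A minimal manual re-check of that topology, reusing only the interface and namespace names from the trace (the loop and the bridge query here are illustrative, not part of the test):

# Re-verify the test topology built above (names taken from nvmf_veth_init).
ip -br link show master nvmf_br                         # expect nvmf_init_br{,2} and nvmf_tgt_br{,2} enslaved
for addr in 10.0.0.3 10.0.0.4; do
  ping -c 1 -W 1 "$addr"                                # target-side addresses, reached through nvmf_br
done
ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.1  # initiator address reachable from the target namespace
lsmod | grep -E 'nvme_tcp|nvme_fabrics'                 # host-side modules pulled in by 'modprobe nvme-tcp'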
00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:26.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=80474 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 80474 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80474 ']' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.227 16:06:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:26.227 [2024-11-20 16:06:24.397182] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:26.227 [2024-11-20 16:06:24.397531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.485 [2024-11-20 16:06:24.553343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.485 [2024-11-20 16:06:24.620175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.485 [2024-11-20 16:06:24.620255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.485 [2024-11-20 16:06:24.620271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.485 [2024-11-20 16:06:24.620282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.485 [2024-11-20 16:06:24.620292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
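The target application (pid 80474) is now running inside nvmf_tgt_ns_spdk but idle, because --wait-for-rpc defers all configuration until RPCs arrive. The notices that follow (the uring socket override, the null0 bdev, the TCP transport init, and the listener on 10.0.0.3:4420) imply a configuration sequence roughly like the sketch below; the exact helper digest.sh drives is not visible in this trace, so the null bdev size, block size, and allow-any-host setting are assumptions:

# Hypothetical reconstruction of the target-side setup implied by the notices
# below, issued against the default /var/tmp/spdk.sock of pid 80474.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i uring                      # "Default socket implementation override: uring"
$rpc framework_start_init                                # leave --wait-for-rpc mode
$rpc bdev_null_create null0 100 4096                     # size and block size assumed
$rpc nvmf_create_transport -t tcp                        # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420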
00:18:26.485 [2024-11-20 16:06:24.620757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:27.417 [2024-11-20 16:06:25.506107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:27.417 null0 00:18:27.417 [2024-11-20 16:06:25.561373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.417 [2024-11-20 16:06:25.585517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:27.417 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80506 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80506 /var/tmp/bperf.sock 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80506 ']' 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:27.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.418 16:06:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:27.418 [2024-11-20 16:06:25.640864] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:27.418 [2024-11-20 16:06:25.641192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80506 ] 00:18:27.674 [2024-11-20 16:06:25.882549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.932 [2024-11-20 16:06:25.955964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.576 16:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.576 16:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:28.576 16:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:28.576 16:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:28.576 16:06:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:28.834 [2024-11-20 16:06:26.954070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.834 16:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:28.835 16:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:29.402 nvme0n1 00:18:29.402 16:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:29.402 16:06:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:29.402 Running I/O for 2 seconds... 
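Every run_bperf pass repeats the same wiring over the initiator's private RPC socket: finish framework init, attach the target's listener as a local NVMe bdev with the TCP data digest enabled, then kick off the timed workload. A condensed sketch of that sequence; the commands are the ones traced above, only the shell variables are added for readability:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bperf.sock

    # bdevperf was started with --wait-for-rpc, so initialize its framework first.
    "$spdk/scripts/rpc.py" -s "$sock" framework_start_init

    # Attach the listener on 10.0.0.3:4420 as bdev "nvme0"; --ddgst turns on the
    # NVMe/TCP data digest, which is what generates the crc32c work counted later.
    "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # bdevperf was started with -z, so the workload only runs when explicitly asked.
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests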
00:18:31.274 14859.00 IOPS, 58.04 MiB/s [2024-11-20T16:06:29.524Z] 14922.50 IOPS, 58.29 MiB/s 00:18:31.274 Latency(us) 00:18:31.274 [2024-11-20T16:06:29.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.274 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:31.274 nvme0n1 : 2.01 14932.74 58.33 0.00 0.00 8566.30 7983.48 17635.14 00:18:31.274 [2024-11-20T16:06:29.524Z] =================================================================================================================== 00:18:31.274 [2024-11-20T16:06:29.524Z] Total : 14932.74 58.33 0.00 0.00 8566.30 7983.48 17635.14 00:18:31.274 { 00:18:31.274 "results": [ 00:18:31.274 { 00:18:31.274 "job": "nvme0n1", 00:18:31.274 "core_mask": "0x2", 00:18:31.274 "workload": "randread", 00:18:31.274 "status": "finished", 00:18:31.274 "queue_depth": 128, 00:18:31.274 "io_size": 4096, 00:18:31.274 "runtime": 2.0072, 00:18:31.274 "iops": 14932.742128337983, 00:18:31.274 "mibps": 58.331023938820245, 00:18:31.274 "io_failed": 0, 00:18:31.274 "io_timeout": 0, 00:18:31.274 "avg_latency_us": 8566.300180708091, 00:18:31.274 "min_latency_us": 7983.476363636363, 00:18:31.274 "max_latency_us": 17635.14181818182 00:18:31.274 } 00:18:31.274 ], 00:18:31.274 "core_count": 1 00:18:31.274 } 00:18:31.274 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:31.274 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:31.274 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:31.274 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:31.274 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:31.274 | select(.opcode=="crc32c") 00:18:31.274 | "\(.module_name) \(.executed)"' 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80506 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80506 ']' 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80506 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80506 00:18:31.840 killing process with pid 80506 00:18:31.840 Received shutdown signal, test time was about 2.000000 seconds 00:18:31.840 00:18:31.840 Latency(us) 00:18:31.840 [2024-11-20T16:06:30.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:31.840 [2024-11-20T16:06:30.090Z] =================================================================================================================== 00:18:31.840 [2024-11-20T16:06:30.090Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80506' 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80506 00:18:31.840 16:06:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80506 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80568 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80568 /var/tmp/bperf.sock 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80568 ']' 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:31.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.840 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:32.099 [2024-11-20 16:06:30.114046] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:18:32.099 [2024-11-20 16:06:30.114294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80568 ] 00:18:32.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.099 Zero copy mechanism will not be used. 00:18:32.099 [2024-11-20 16:06:30.255560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.099 [2024-11-20 16:06:30.317596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.099 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.099 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:32.099 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:32.099 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:32.099 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:32.665 [2024-11-20 16:06:30.632315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.665 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:32.665 16:06:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:32.927 nvme0n1 00:18:32.927 16:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:32.927 16:06:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:32.927 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:32.927 Zero copy mechanism will not be used. 00:18:32.927 Running I/O for 2 seconds... 
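After each timed run the harness checks that the digests were actually computed in the expected place: it pulls the accel framework statistics from the bdevperf process, filters out the crc32c operation, and requires a non-zero executed count from the expected module (software here, since both dsa_initiator and dsa_target are false in this job). A sketch of that check, reusing the accel_get_stats call and jq filter from the trace:

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    exp_module=software                           # no DSA offload in this job
    (( acc_executed > 0 )) || exit 1              # digests must have been executed
    [[ $acc_module == "$exp_module" ]] || exit 1  # ...and by the expected module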
00:18:35.239 7584.00 IOPS, 948.00 MiB/s [2024-11-20T16:06:33.489Z] 7616.00 IOPS, 952.00 MiB/s 00:18:35.239 Latency(us) 00:18:35.239 [2024-11-20T16:06:33.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.239 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:35.239 nvme0n1 : 2.00 7612.82 951.60 0.00 0.00 2098.20 1995.87 9889.98 00:18:35.239 [2024-11-20T16:06:33.489Z] =================================================================================================================== 00:18:35.239 [2024-11-20T16:06:33.489Z] Total : 7612.82 951.60 0.00 0.00 2098.20 1995.87 9889.98 00:18:35.239 { 00:18:35.239 "results": [ 00:18:35.239 { 00:18:35.239 "job": "nvme0n1", 00:18:35.239 "core_mask": "0x2", 00:18:35.239 "workload": "randread", 00:18:35.239 "status": "finished", 00:18:35.239 "queue_depth": 16, 00:18:35.239 "io_size": 131072, 00:18:35.239 "runtime": 2.002936, 00:18:35.239 "iops": 7612.824373819233, 00:18:35.239 "mibps": 951.6030467274041, 00:18:35.239 "io_failed": 0, 00:18:35.239 "io_timeout": 0, 00:18:35.239 "avg_latency_us": 2098.201690355814, 00:18:35.239 "min_latency_us": 1995.8690909090908, 00:18:35.239 "max_latency_us": 9889.978181818182 00:18:35.239 } 00:18:35.239 ], 00:18:35.239 "core_count": 1 00:18:35.239 } 00:18:35.239 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:35.239 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:35.239 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:35.240 | select(.opcode=="crc32c") 00:18:35.240 | "\(.module_name) \(.executed)"' 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80568 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80568 ']' 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80568 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80568 00:18:35.240 killing process with pid 80568 00:18:35.240 Received shutdown signal, test time was about 2.000000 seconds 00:18:35.240 00:18:35.240 Latency(us) 00:18:35.240 [2024-11-20T16:06:33.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:35.240 [2024-11-20T16:06:33.490Z] =================================================================================================================== 00:18:35.240 [2024-11-20T16:06:33.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80568' 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80568 00:18:35.240 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80568 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80614 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80614 /var/tmp/bperf.sock 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80614 ']' 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:35.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.498 16:06:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:35.498 [2024-11-20 16:06:33.737006] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:18:35.499 [2024-11-20 16:06:33.737311] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80614 ] 00:18:35.757 [2024-11-20 16:06:33.885989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.757 [2024-11-20 16:06:33.945529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.692 16:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.692 16:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:36.693 16:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:36.693 16:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:36.693 16:06:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:36.951 [2024-11-20 16:06:35.027227] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:36.951 16:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:36.951 16:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:37.210 nvme0n1 00:18:37.210 16:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:37.210 16:06:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:37.469 Running I/O for 2 seconds... 
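The four clean-digest runs differ only in the bdevperf arguments: randread then randwrite, either 4 KiB at queue depth 128 or 128 KiB at queue depth 16. The invocation for the run above, annotated flag by flag; the interpretations are the usual bdevperf semantics rather than anything the log itself spells out:

    args=(
        -m 2                    # core mask 0x2, i.e. run the reactor on core 1
        -r /var/tmp/bperf.sock  # private RPC socket, separate from the target's spdk.sock
        -w randwrite            # I/O pattern for this run
        -o 4096                 # I/O size in bytes
        -t 2                    # run time in seconds
        -q 128                  # queue depth
        -z                      # wait for a perform_tests RPC before starting the workload
        --wait-for-rpc          # pause framework init until framework_start_init arrives
    )
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}" &
    bperfpid=$!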
00:18:39.342 16003.00 IOPS, 62.51 MiB/s [2024-11-20T16:06:37.592Z] 16034.50 IOPS, 62.63 MiB/s 00:18:39.342 Latency(us) 00:18:39.342 [2024-11-20T16:06:37.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.342 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.342 nvme0n1 : 2.01 16020.46 62.58 0.00 0.00 7982.38 2517.18 16443.58 00:18:39.342 [2024-11-20T16:06:37.592Z] =================================================================================================================== 00:18:39.342 [2024-11-20T16:06:37.592Z] Total : 16020.46 62.58 0.00 0.00 7982.38 2517.18 16443.58 00:18:39.342 { 00:18:39.342 "results": [ 00:18:39.342 { 00:18:39.342 "job": "nvme0n1", 00:18:39.342 "core_mask": "0x2", 00:18:39.342 "workload": "randwrite", 00:18:39.342 "status": "finished", 00:18:39.342 "queue_depth": 128, 00:18:39.342 "io_size": 4096, 00:18:39.342 "runtime": 2.009742, 00:18:39.342 "iops": 16020.464318305534, 00:18:39.342 "mibps": 62.57993874338099, 00:18:39.342 "io_failed": 0, 00:18:39.342 "io_timeout": 0, 00:18:39.342 "avg_latency_us": 7982.382067668643, 00:18:39.342 "min_latency_us": 2517.1781818181817, 00:18:39.342 "max_latency_us": 16443.578181818182 00:18:39.342 } 00:18:39.342 ], 00:18:39.342 "core_count": 1 00:18:39.342 } 00:18:39.342 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:39.342 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:39.600 | select(.opcode=="crc32c") 00:18:39.600 | "\(.module_name) \(.executed)"' 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80614 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80614 ']' 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80614 00:18:39.600 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80614 00:18:39.858 killing process with pid 80614 00:18:39.858 Received shutdown signal, test time was about 2.000000 seconds 00:18:39.858 00:18:39.858 Latency(us) 00:18:39.858 [2024-11-20T16:06:38.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:39.858 [2024-11-20T16:06:38.108Z] =================================================================================================================== 00:18:39.858 [2024-11-20T16:06:38.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80614' 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80614 00:18:39.858 16:06:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80614 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80681 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80681 /var/tmp/bperf.sock 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 80681 ']' 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:39.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.858 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:40.116 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:40.116 Zero copy mechanism will not be used. 00:18:40.116 [2024-11-20 16:06:38.130525] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:18:40.116 [2024-11-20 16:06:38.130639] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80681 ] 00:18:40.116 [2024-11-20 16:06:38.270115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.116 [2024-11-20 16:06:38.322220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.374 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:40.374 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:18:40.374 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:40.374 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:40.374 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:40.632 [2024-11-20 16:06:38.695684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.632 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:40.632 16:06:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:40.890 nvme0n1 00:18:40.890 16:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:40.890 16:06:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:41.148 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:41.148 Zero copy mechanism will not be used. 00:18:41.148 Running I/O for 2 seconds... 
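perform_tests leaves behind the JSON block printed after each run, and the summary table is derived from it. The throughput column is just IOPS scaled by the I/O size: with 128 KiB I/O, MiB/s = iops * 131072 / 1048576 = iops / 8, so the 6522.98 IOPS reported below works out to the 815.37 MiB/s shown next to it. A quick way to pull the same figures out of a saved copy of that block (results.json is only an example name, not a file the test produces):

    # results.json holds the {"results": [...], "core_count": ...} object emitted after perform_tests.
    jq -r '.results[] |
           "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' \
       results.json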
00:18:43.026 6518.00 IOPS, 814.75 MiB/s [2024-11-20T16:06:41.276Z] 6525.50 IOPS, 815.69 MiB/s 00:18:43.026 Latency(us) 00:18:43.026 [2024-11-20T16:06:41.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.026 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:43.026 nvme0n1 : 2.00 6522.98 815.37 0.00 0.00 2447.37 1720.32 4944.99 00:18:43.026 [2024-11-20T16:06:41.276Z] =================================================================================================================== 00:18:43.026 [2024-11-20T16:06:41.276Z] Total : 6522.98 815.37 0.00 0.00 2447.37 1720.32 4944.99 00:18:43.026 { 00:18:43.026 "results": [ 00:18:43.026 { 00:18:43.026 "job": "nvme0n1", 00:18:43.026 "core_mask": "0x2", 00:18:43.026 "workload": "randwrite", 00:18:43.026 "status": "finished", 00:18:43.026 "queue_depth": 16, 00:18:43.026 "io_size": 131072, 00:18:43.026 "runtime": 2.003226, 00:18:43.026 "iops": 6522.978435783082, 00:18:43.026 "mibps": 815.3723044728853, 00:18:43.026 "io_failed": 0, 00:18:43.026 "io_timeout": 0, 00:18:43.026 "avg_latency_us": 2447.3654855743475, 00:18:43.026 "min_latency_us": 1720.32, 00:18:43.026 "max_latency_us": 4944.989090909091 00:18:43.026 } 00:18:43.026 ], 00:18:43.026 "core_count": 1 00:18:43.026 } 00:18:43.284 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:43.284 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:43.284 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:43.284 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:43.284 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:43.284 | select(.opcode=="crc32c") 00:18:43.284 | "\(.module_name) \(.executed)"' 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80681 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80681 ']' 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80681 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80681 00:18:43.542 killing process with pid 80681 00:18:43.542 Received shutdown signal, test time was about 2.000000 seconds 00:18:43.542 00:18:43.542 Latency(us) 00:18:43.542 [2024-11-20T16:06:41.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.542 
[2024-11-20T16:06:41.792Z] =================================================================================================================== 00:18:43.542 [2024-11-20T16:06:41.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80681' 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80681 00:18:43.542 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80681 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80474 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 80474 ']' 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 80474 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80474 00:18:43.800 killing process with pid 80474 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.800 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.801 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80474' 00:18:43.801 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 80474 00:18:43.801 16:06:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 80474 00:18:43.801 00:18:43.801 real 0m17.693s 00:18:43.801 user 0m34.622s 00:18:43.801 sys 0m4.425s 00:18:43.801 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.801 ************************************ 00:18:43.801 END TEST nvmf_digest_clean 00:18:43.801 ************************************ 00:18:43.801 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:44.060 ************************************ 00:18:44.060 START TEST nvmf_digest_error 00:18:44.060 ************************************ 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:18:44.060 16:06:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=80761 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 80761 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80761 ']' 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.060 16:06:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:44.060 [2024-11-20 16:06:42.140406] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:44.060 [2024-11-20 16:06:42.140531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.060 [2024-11-20 16:06:42.294339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.319 [2024-11-20 16:06:42.358487] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.319 [2024-11-20 16:06:42.358557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.319 [2024-11-20 16:06:42.358573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.319 [2024-11-20 16:06:42.358583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.319 [2024-11-20 16:06:42.358593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
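The error-path test that starts here brings the target up the same way, with one difference that shows up in the following lines: crc32c is assigned to the error accel module so that digest failures can be injected on demand. Injection stays disabled while the controller is attached, then a batch of 256 crc32c operations is corrupted, which is what produces the repeated data digest error completions at the end of this section. Reduced to the underlying rpc.py calls traced below, grouped by RPC socket rather than in strict trace order (the rpc_cmd/bperf_rpc wrappers and namespace plumbing are omitted):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default /var/tmp/spdk.sock): route crc32c through the
    # error-injection accel module, but keep injection off during setup.
    "$rpc" accel_assign_opc -o crc32c -m error
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Initiator side (bdevperf's socket): keep NVMe error statistics, retry
    # failed I/O indefinitely, and attach the subsystem with data digest on.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
           -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt the next 256 crc32c operations on the target; the initiator then
    # reports them as "data digest error" with transient transport error completions.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256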
00:18:44.319 [2024-11-20 16:06:42.359078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.254 [2024-11-20 16:06:43.211706] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.254 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.255 [2024-11-20 16:06:43.272417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:45.255 null0 00:18:45.255 [2024-11-20 16:06:43.325207] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.255 [2024-11-20 16:06:43.349360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80793 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80793 /var/tmp/bperf.sock 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:45.255 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80793 ']' 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.255 16:06:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:45.255 [2024-11-20 16:06:43.405655] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:45.255 [2024-11-20 16:06:43.405759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80793 ] 00:18:45.513 [2024-11-20 16:06:43.558274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.513 [2024-11-20 16:06:43.625840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.513 [2024-11-20 16:06:43.683424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.460 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.460 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:46.460 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.460 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.719 16:06:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:46.978 nvme0n1 00:18:46.978 16:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:46.978 16:06:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.978 16:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:46.978 16:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.978 16:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:46.978 16:06:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:46.978 Running I/O for 2 seconds... 00:18:47.240 [2024-11-20 16:06:45.252379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.252439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.252455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.269912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.269960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.269974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.287183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.287368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.287387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.304590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.304632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.304646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.321782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.321839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.321855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.338971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.339013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19869 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.339027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.356184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.356224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.356237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.373415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.373596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.373615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.390842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.390884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.390898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.409461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.409503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.409516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.426792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.426852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.426866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.444139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.444308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.444326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.461504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.461546] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.461560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.240 [2024-11-20 16:06:45.478830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.240 [2024-11-20 16:06:45.478871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.240 [2024-11-20 16:06:45.478885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.496072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.496113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.496126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.513631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.513676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.513690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.531410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.531488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.531505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.549076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.549142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.549156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.566332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.566377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.566390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.583524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 
16:06:45.583564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.583578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.518 [2024-11-20 16:06:45.600728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.518 [2024-11-20 16:06:45.600768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.518 [2024-11-20 16:06:45.600782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.617895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.617934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.617946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.635109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.635369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.635389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.653012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.653088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.653103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.670370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.670590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.670609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.687825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.687867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.687881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.705017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.705183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.705202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.722350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.722391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.722405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.739520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.739562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.739576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.519 [2024-11-20 16:06:45.756667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.519 [2024-11-20 16:06:45.756706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.519 [2024-11-20 16:06:45.756719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.773840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.773878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.773891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.791011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.791172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.791190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.808337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.808380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.808393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.825519] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.825561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.825574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.842856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.842896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.842909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.860072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.860112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.860125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.877290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.877474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.877492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.894655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.894697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.894711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.911848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.912007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.912025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.929176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.929218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.929231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:47.778 [2024-11-20 16:06:45.946311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.946350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.946363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.963465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.963627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.963646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.980790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.980851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.980864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:45.997946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:45.998106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:45.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:47.778 [2024-11-20 16:06:46.015235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:47.778 [2024-11-20 16:06:46.015276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:47.778 [2024-11-20 16:06:46.015289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.037 [2024-11-20 16:06:46.032367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.037 [2024-11-20 16:06:46.032407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.037 [2024-11-20 16:06:46.032420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.037 [2024-11-20 16:06:46.049563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.037 [2024-11-20 16:06:46.049602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.037 [2024-11-20 16:06:46.049615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.037 [2024-11-20 16:06:46.066794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.037 [2024-11-20 16:06:46.066851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.066865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.083971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.084140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.084159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.101304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.101353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.101367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.118486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.118526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.118540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.135683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.135724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.135737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.152862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.152902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.152915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.170025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.170194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.170211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.187353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.187394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.187407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.204497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.204537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.204550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.221670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.221709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.221722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 14548.00 IOPS, 56.83 MiB/s [2024-11-20T16:06:46.288Z] [2024-11-20 16:06:46.238865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.238905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.238918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.255969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.256135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.256153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.038 [2024-11-20 16:06:46.273288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.038 [2024-11-20 16:06:46.273335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.038 [2024-11-20 16:06:46.273349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.297 [2024-11-20 16:06:46.290519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.297 [2024-11-20 16:06:46.290560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:15704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.297 [2024-11-20 16:06:46.290574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.297 [2024-11-20 16:06:46.307723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.297 [2024-11-20 16:06:46.307765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.297 [2024-11-20 16:06:46.307778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.297 [2024-11-20 16:06:46.324925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.324963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.324976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.349633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.349674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.349687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.366884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.366924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.366937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.384044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.384082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.384095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.401190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.401365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.418548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.418590] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.418603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.435714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.435755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.435768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.452873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.452911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.452925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.469999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.470038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.470051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.487132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.487299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.487317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.504514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.504557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.504570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.522729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.522953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.522971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.298 [2024-11-20 16:06:46.540227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a7e230) 00:18:48.298 [2024-11-20 16:06:46.540270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.298 [2024-11-20 16:06:46.540283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.557546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.557588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.557601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.574730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.574922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.574940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.592173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.592215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.592229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.609377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.609417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.609430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.626819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.626881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.626895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.644101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.644140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.644169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.661421] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.661587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.661606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.678796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.679007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.679208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.696387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.696585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.696710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.714084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.714276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.714401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.731644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.731832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.731964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.749389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.749566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.749689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.766986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.767183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.767307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:18:48.558 [2024-11-20 16:06:46.784532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.784724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.784864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.558 [2024-11-20 16:06:46.802417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.558 [2024-11-20 16:06:46.802613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.558 [2024-11-20 16:06:46.802738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.817 [2024-11-20 16:06:46.819988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.820177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.820302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.837639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.837698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.837727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.855006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.855179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.855197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.872376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.872418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.872447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.889541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.889695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.889712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.906873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.906914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.906927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.923998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.924152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.924169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.941313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.941361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.941375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.958524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.958565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.958578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.975666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.975834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.975852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:46.992954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:46.992994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:46.993007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:47.010092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:47.010132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:47.010145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:47.027227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:47.027396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:47.027415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:47.044534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:47.044577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:47.044591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:48.818 [2024-11-20 16:06:47.061700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:48.818 [2024-11-20 16:06:47.061741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:48.818 [2024-11-20 16:06:47.061755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.078882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.078922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.078934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.096067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.096105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.096118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.113211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.113385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.113402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.130858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.130898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:49.077 [2024-11-20 16:06:47.130912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.148141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.148303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.148321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.165570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.165610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.165623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.183147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.183187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.183216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.200363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.200401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.200414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 [2024-11-20 16:06:47.217530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.217694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.217713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 14548.00 IOPS, 56.83 MiB/s [2024-11-20T16:06:47.327Z] [2024-11-20 16:06:47.236143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a7e230) 00:18:49.077 [2024-11-20 16:06:47.236183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.077 [2024-11-20 16:06:47.236197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:49.077 00:18:49.077 Latency(us) 00:18:49.077 [2024-11-20T16:06:47.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.077 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO 
size: 4096) 00:18:49.077 nvme0n1 : 2.01 14593.87 57.01 0.00 0.00 8763.95 8102.63 33363.78 00:18:49.077 [2024-11-20T16:06:47.327Z] =================================================================================================================== 00:18:49.077 [2024-11-20T16:06:47.327Z] Total : 14593.87 57.01 0.00 0.00 8763.95 8102.63 33363.78 00:18:49.077 { 00:18:49.077 "results": [ 00:18:49.077 { 00:18:49.077 "job": "nvme0n1", 00:18:49.077 "core_mask": "0x2", 00:18:49.077 "workload": "randread", 00:18:49.077 "status": "finished", 00:18:49.077 "queue_depth": 128, 00:18:49.077 "io_size": 4096, 00:18:49.077 "runtime": 2.011187, 00:18:49.077 "iops": 14593.869192670796, 00:18:49.077 "mibps": 57.0073015338703, 00:18:49.077 "io_failed": 0, 00:18:49.077 "io_timeout": 0, 00:18:49.077 "avg_latency_us": 8763.951357395288, 00:18:49.077 "min_latency_us": 8102.632727272728, 00:18:49.077 "max_latency_us": 33363.781818181815 00:18:49.077 } 00:18:49.077 ], 00:18:49.077 "core_count": 1 00:18:49.077 } 00:18:49.077 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:49.077 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:49.077 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:49.077 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:49.077 | .driver_specific 00:18:49.077 | .nvme_error 00:18:49.077 | .status_code 00:18:49.078 | .command_transient_transport_error' 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 115 > 0 )) 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80793 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80793 ']' 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80793 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.336 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80793 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.595 killing process with pid 80793 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80793' 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80793 00:18:49.595 Received shutdown signal, test time was about 2.000000 seconds 00:18:49.595 00:18:49.595 Latency(us) 00:18:49.595 [2024-11-20T16:06:47.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.595 [2024-11-20T16:06:47.845Z] =================================================================================================================== 00:18:49.595 
[2024-11-20T16:06:47.845Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80793 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80849 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80849 /var/tmp/bperf.sock 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80849 ']' 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.595 16:06:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:49.854 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:49.854 Zero copy mechanism will not be used. 00:18:49.854 [2024-11-20 16:06:47.845193] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:18:49.854 [2024-11-20 16:06:47.845290] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80849 ] 00:18:49.854 [2024-11-20 16:06:47.987444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.854 [2024-11-20 16:06:48.047946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.854 [2024-11-20 16:06:48.102041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.113 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.113 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:50.113 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:50.113 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:50.372 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:50.631 nvme0n1 00:18:50.631 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:50.631 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.631 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:50.632 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.632 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:50.632 16:06:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:50.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:50.898 Zero copy mechanism will not be used. 00:18:50.898 Running I/O for 2 seconds... 
00:18:50.899 [2024-11-20 16:06:48.916618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.916698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.916714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.921008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.921053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.921068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.925315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.925384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.929605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.929650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.929664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.933987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.934047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.934061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.938222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.938285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.938298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.942498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.942560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.942574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.946883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.946931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.946944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.951125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.951186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.951200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.955429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.955489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.955502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.959755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.959826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.959840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.964059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.964104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.964118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.968329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.968389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.968404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.972671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.972732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.972746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.976980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.977054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.981254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.981314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.981340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.985549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.985595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.985609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.989799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.989869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.989884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.993949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.994008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.994021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:48.998234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:48.998295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.899 [2024-11-20 16:06:48.998309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:49.002499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:49.002560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:50.899 [2024-11-20 16:06:49.002573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.899 [2024-11-20 16:06:49.006939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.899 [2024-11-20 16:06:49.007016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.007029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.011269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.011331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.011345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.015571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.015633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.015647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.019925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.019969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.019983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.024221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.024282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.024296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.028643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.028685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.028698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.033015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.033071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.033101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.037495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.037537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.037551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.041916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.041973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.041987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.046282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.046340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.046354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.050562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.050624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.050638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.054895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.054939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.054952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.059274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.059318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.059331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.063637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.063680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.063694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.067963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.068005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.068019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.072283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.072356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.076693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.076736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.076750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.081033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.081090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.081104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.085383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.085423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.089683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.089742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.089755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.094014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:50.900 [2024-11-20 16:06:49.094073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.094087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.098296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.098354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.098368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.102646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.102707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.102722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.107050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.107111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.900 [2024-11-20 16:06:49.107126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.900 [2024-11-20 16:06:49.111294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.900 [2024-11-20 16:06:49.111357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.111371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.115565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.115623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.115636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.119848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.119892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.119906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.124052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.124095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.124109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.128331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.128389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.128402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.132552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.132612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.132626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.136892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.136934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.136947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:50.901 [2024-11-20 16:06:49.141125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:50.901 [2024-11-20 16:06:49.141184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:50.901 [2024-11-20 16:06:49.141197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.145530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.145586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.149956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.150015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.150029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.154246] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.154304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.154318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.158609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.158672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.158686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.162895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.162956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.162970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.167175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.167237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.167250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.171477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.171540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.171554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.175804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.175876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.175890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.179992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.180037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.180050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:18:51.211 [2024-11-20 16:06:49.184247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.184307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.184321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.188568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.188604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.188618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.192774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.192825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.192840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.197051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.197091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.197105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.201370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.201410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.201423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.205637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.205680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.205693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.209999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.210040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.210053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.214415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.214472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.214487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.218805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.218877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.223222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.223283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.223297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.227541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.227600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.227613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.211 [2024-11-20 16:06:49.231751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.211 [2024-11-20 16:06:49.231794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.211 [2024-11-20 16:06:49.231806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.235968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.236013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.236027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.240301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.240359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.240372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.244619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.244682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.248938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.248983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.248996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.253290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.253370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.253386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.257699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.257757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.257771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.262024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.262081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.262095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.266352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.266414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.266427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.270682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.270746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.212 [2024-11-20 16:06:49.270760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.274952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.275012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.275026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.279206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.279268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.279282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.283409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.283470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.283484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.287683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.287756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.291935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.291977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.291990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.296246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.296304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.296334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.300626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.300668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.300682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.304890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.304930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.304943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.309188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.309232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.309246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.313458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.313499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.313512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.317837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.317877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.317891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.322111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.322153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.322167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.326495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.326537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.326551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.330798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.330852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.330866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.335044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.335085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.335107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.339364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.339407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.339421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.343684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.343729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.343743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.212 [2024-11-20 16:06:49.347986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.212 [2024-11-20 16:06:49.348029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.212 [2024-11-20 16:06:49.348042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.352256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.352298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.352311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.356578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.356624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.356638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.360854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:51.213 [2024-11-20 16:06:49.360899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.360912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.365169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.365214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.365228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.369495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.369539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.369553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.373709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.373754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.373768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.377959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.378002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.378015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.382214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.382257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.382270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.386504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.386546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.386559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.390802] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.390855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.390868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.395111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.395155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.395169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.399405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.399448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.399465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.403662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.403707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.403721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.407915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.407957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.407972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.412156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.412197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.412211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.416384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.416426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.416439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:18:51.213 [2024-11-20 16:06:49.420660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.420701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.420714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.424963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.425004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.425017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.429190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.429234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.429248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.433459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.433500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.433514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.437676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.437718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.437731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.441944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.441984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.441998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.446189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.446235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.446249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.450455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.450501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.450515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.213 [2024-11-20 16:06:49.454674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.213 [2024-11-20 16:06:49.454720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.213 [2024-11-20 16:06:49.454734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.458957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.459001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.459015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.463259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.463303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.463316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.467479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.467522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.467536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.471769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.471823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.471838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.476000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.476043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.476057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.480220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.480261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.480274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.484477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.484519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.484533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.488798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.488853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.488867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.493090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.493131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.493145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.497451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.497494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.497508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.501736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.501779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.501792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.505994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.506035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
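The repeated nvme_tcp.c:1365 messages above report receive-path data digest failures: the CRC-32C checksum computed over an incoming data PDU payload does not match the digest the PDU carried, and each affected READ is then completed with a transient transport error (00/22), as the paired spdk_nvme_print_completion lines show. As a minimal, self-contained sketch (not SPDK code; the crc32c() helper, the payload buffer, and the digest values are made up for illustration), the following program computes the CRC-32C digest that NVMe/TCP uses and flags a mismatch the same way these log lines describe:

/* Minimal sketch, not SPDK code: CRC-32C (Castagnoli), the checksum NVMe/TCP
 * uses for its header and data digests, computed bit by bit and compared
 * against a digest value supposedly carried in a received PDU. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reflected CRC-32C: polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
                crc ^= buf[i];
                for (int bit = 0; bit < 8; bit++) {
                        crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
                }
        }
        return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
        /* Standard check vector: CRC-32C("123456789") == 0xE3069283. */
        const char *check = "123456789";
        printf("crc32c(\"123456789\") = 0x%08x\n",
               crc32c((const uint8_t *)check, strlen(check)));

        /* Hypothetical 512-byte PDU payload plus the digest it "carried". */
        uint8_t payload[512];
        memset(payload, 0xa5, sizeof(payload));
        uint32_t received = 0;                       /* deliberately wrong */
        uint32_t computed = crc32c(payload, sizeof(payload));

        if (computed != received) {
                /* A mismatch like this is what the log calls a data digest error. */
                printf("data digest error: computed 0x%08x, received 0x%08x\n",
                       computed, received);
        }
        return 0;
}

Built with a plain cc invocation, the sketch first prints the well-known CRC-32C check value 0xe3069283 for "123456789" and then reports a digest mismatch, since the sample "received" digest is deliberately wrong.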
00:18:51.475 [2024-11-20 16:06:49.506049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.510279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.510325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.510339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.514542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.514585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.514599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.518892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.518933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.518946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.523157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.523216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.523231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.527484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.527545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.527559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.531758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.531827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.531842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.536161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.536207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.536221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.540492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.540553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.540565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.544776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.544848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.544864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.549083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.549143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.475 [2024-11-20 16:06:49.549156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.475 [2024-11-20 16:06:49.553384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.475 [2024-11-20 16:06:49.553424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.553438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.557702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.557745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.557758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.561998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.562060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.562073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.566321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.566382] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.566396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.570661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.570704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.570718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.574901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.574943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.574957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.579159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.579203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.579216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.583438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.583484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.583498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.587758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.587802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.587828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.592086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.592143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.592156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.596464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:51.476 [2024-11-20 16:06:49.596507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.596521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.600740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.600790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.600804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.605085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.605145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.605159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.609509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.609554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.609568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.613951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.614008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.614022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.618305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.618363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.618377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.622690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.622733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.622747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.627063] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.627124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.627139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.631346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.631405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.631418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.635738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.635782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.635795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.640091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.640134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.640148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.644300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.644357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.644370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.648562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.648619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.648633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.652915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.652957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.652970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:18:51.476 [2024-11-20 16:06:49.657124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.657181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.657194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.661433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.661473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.661486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.665753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.665822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.476 [2024-11-20 16:06:49.665838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.476 [2024-11-20 16:06:49.670077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.476 [2024-11-20 16:06:49.670136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.670149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.674394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.674453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.674467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.678695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.678753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.678767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.682884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.682940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.682953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.687090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.687147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.687161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.691352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.691410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.691423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.695631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.695689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.695702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.699914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.699956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.699970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.704084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.704125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.704137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.708260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.708317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.708331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.712556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.712614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.712627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.716730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.716771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.716785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.477 [2024-11-20 16:06:49.720897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.477 [2024-11-20 16:06:49.720939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.477 [2024-11-20 16:06:49.720952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.737 [2024-11-20 16:06:49.725176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.737 [2024-11-20 16:06:49.725232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.737 [2024-11-20 16:06:49.725246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.737 [2024-11-20 16:06:49.729491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.729531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.729544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.733793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.733862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.733876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.738003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.738059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.738072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.742273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.742331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 
[2024-11-20 16:06:49.742344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.746515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.746574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.746588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.750831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.750887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.750900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.755053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.755110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.755124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.759257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.759315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.759328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.763549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.763608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.763621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.767845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.767887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.767900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.772098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.772140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.772154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.776330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.776389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.776403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.780630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.780674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.780688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.784851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.784904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.784918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.789106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.789147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.789160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.793382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.793423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.793437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.797694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.797751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.797765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.802034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.802091] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.802104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.806320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.806389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.806403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.810632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.810691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.810704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.814920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.814977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.814990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.819185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.819243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.819257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.823459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.823516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.827753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.827822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.827837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.832144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.832187] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.832201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.836450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.836491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.738 [2024-11-20 16:06:49.836504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.738 [2024-11-20 16:06:49.840765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.738 [2024-11-20 16:06:49.840823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.840839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.844964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.845036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.845050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.849361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.849402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.849415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.853596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.853649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.853663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.857860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.857900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.857914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.862231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.862288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.862302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.866466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.866511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.866525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.870787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.870842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.870856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.875034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.875077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.875090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.879325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.879365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.879379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.883628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.883672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.883686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.887971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.888014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.888028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.892266] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.892316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.892329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.896551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.896600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.896613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.900835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.900878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.900892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.905044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.905088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.905102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.909285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.909333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.909348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 7192.00 IOPS, 899.00 MiB/s [2024-11-20T16:06:49.989Z] [2024-11-20 16:06:49.914578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.914625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.914639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.918873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.918916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.918930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.923204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.923245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.923259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.927390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.927433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.927446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.931682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.931740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.931755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.936048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.936090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.936103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.940390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.940447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.940461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.944653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.944709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.944722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.948997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.949054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.949067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.953281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.739 [2024-11-20 16:06:49.953346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.739 [2024-11-20 16:06:49.953361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.739 [2024-11-20 16:06:49.957523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.957566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.957580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.961762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.961828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.961843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.966010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.966066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.970267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.970325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.970339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.974527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.974585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.974598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.978844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.978902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:51.740 [2024-11-20 16:06:49.978915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:51.740 [2024-11-20 16:06:49.983131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:51.740 [2024-11-20 16:06:49.983188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:51.740 [2024-11-20 16:06:49.983202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.000 [2024-11-20 16:06:49.987372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.000 [2024-11-20 16:06:49.987430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.000 [2024-11-20 16:06:49.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.000 [2024-11-20 16:06:49.991621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.000 [2024-11-20 16:06:49.991679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.000 [2024-11-20 16:06:49.991693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:49.995929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:49.995971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:49.995983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.000140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.000182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.004400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.004457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.004470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.008712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.008771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.008784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.013065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.013122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.013136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.017318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.017366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.017380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.021559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.021601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.021614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.025869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.025927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.025941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.030151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.030209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.030223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.034461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.034519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.034533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.038711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.038768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.038782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.042999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.043063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.043078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.047306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.047364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.047377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.051552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.051611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.051625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.055730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.055788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.055801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.059980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.060021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.060033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.064307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.064364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.064378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.068627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:52.001 [2024-11-20 16:06:50.068685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.068698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.072910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.072968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.072981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.077205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.077263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.077277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.081489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.081531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.081544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.085791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.085859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.085873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.090109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.090166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.090179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.094412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.094470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.094484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.098666] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.098708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.102907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.001 [2024-11-20 16:06:50.102965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.001 [2024-11-20 16:06:50.102980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.001 [2024-11-20 16:06:50.107134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.107175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.107189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.111330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.111371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.111385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.115493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.115551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.115564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.119916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.119958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.119972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.124311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.124369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.124399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 
dnr:0 00:18:52.002 [2024-11-20 16:06:50.128685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.128727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.128741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.132994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.133050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.137227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.137284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.137297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.141401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.141440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.145729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.145772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.145785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.150132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.150190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.150204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.154504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.154562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.154575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.158822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.158875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.158889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.163145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.163202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.163216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.167487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.167545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.167559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.171899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.171941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.171954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.176129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.176171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.176184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.180413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.180471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.180484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.184784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.184856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.184870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.189036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.189094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.189108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.193388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.193426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.193439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.197700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.197759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.197773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.202039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.202096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.202110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.206407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.206476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.206506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.210911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.210953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.002 [2024-11-20 16:06:50.210967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.215326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.215385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:52.002 [2024-11-20 16:06:50.215399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.002 [2024-11-20 16:06:50.219610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.002 [2024-11-20 16:06:50.219652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.219666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.223972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.224013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.224026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.228332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.228374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.232634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.232676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.232689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.236965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.237004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.237017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.241370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.241411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.241424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.003 [2024-11-20 16:06:50.245783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.003 [2024-11-20 16:06:50.245872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.003 [2024-11-20 16:06:50.245887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.250272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.250329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.250343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.254677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.254747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.254761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.258944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.258986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.258998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.263176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.263215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.263229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.267460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.267517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.267546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.271773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.271858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.271875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.276223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.276281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.276311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.280497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.280553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.280582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.284816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.284871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.284885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.289198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.289242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.289255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.293547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.293590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.293604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.297958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.298014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.298043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.302489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.302535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.302549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.306733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:52.263 [2024-11-20 16:06:50.306791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.306804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.310816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.310885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.310899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.315099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.315156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.315170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.319394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.319451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.319480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.323637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.323696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.323725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.328029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.328070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.328100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.332293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.332351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.332380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.336643] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.336701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.336730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.340910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.340966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.340995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.345271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.345335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.345349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.349644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.349722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.349736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.354274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.354316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.354329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.358792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.358862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.358876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.263 [2024-11-20 16:06:50.363299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.363343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.263 [2024-11-20 16:06:50.363357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:18:52.263 [2024-11-20 16:06:50.367508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.263 [2024-11-20 16:06:50.367565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.367594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.371801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.371871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.371901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.376090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.376147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.376177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.380428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.380485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.380514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.384752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.384837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.384852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.389249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.389291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.389305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.393729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.393769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.393783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.398145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.398203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.398217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.402365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.402422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.402435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.406627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.406683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.406713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.410828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.410881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.410894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.415150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.415207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.415220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.419525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.419569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.419583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.423832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.423874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.423887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.428135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.428177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.428191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.432333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.432376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.432388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.436540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.436582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.436596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.440832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.440873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.440886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.445172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.445216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.445230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.449376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.449414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.449428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.453647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.453691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:52.264 [2024-11-20 16:06:50.453704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.457981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.458023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.458036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.462275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.462319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.462332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.466512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.466556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.466569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.470826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.470868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.470881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.475084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.475140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.479363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.479406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.479420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.483656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.483700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.483714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.487872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.487913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.487926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.492140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.492180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.492193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.496372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.496415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.496429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.500638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.500680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.500694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.504904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.504945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.504959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.264 [2024-11-20 16:06:50.509200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.264 [2024-11-20 16:06:50.509241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.264 [2024-11-20 16:06:50.509255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.513506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.513548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.513561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.517688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.517731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.517744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.522007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.522048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.522062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.526253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.526295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.526309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.530500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.530543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.530557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.534785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.534841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.534856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.539069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.539112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.539125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.543901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:52.524 [2024-11-20 16:06:50.543944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.543957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.524 [2024-11-20 16:06:50.548247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.524 [2024-11-20 16:06:50.548291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.524 [2024-11-20 16:06:50.548306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.552531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.552575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.552589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.556885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.556926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.556940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.561136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.561178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.561191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.565483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.565529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.565543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.569865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.569907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.569921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.574194] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.574236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.574249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.578444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.578487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.578501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.582764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.582826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.582842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.587077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.587119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.587133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.591375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.591418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.591431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.595643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.595687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.595701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.599884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.599924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.599937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 
dnr:0 00:18:52.525 [2024-11-20 16:06:50.604246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.604288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.604302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.608500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.608541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.608554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.612736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.612780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.612793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.617039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.617081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.617094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.621420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.621463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.621477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.625702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.625747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.625766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.630231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.630272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.630286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.634507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.634549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.634563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.638743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.638786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.638800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.643017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.643060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.643073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.647310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.647351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.647365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.651591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.651633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.651647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.655875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.525 [2024-11-20 16:06:50.655917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.525 [2024-11-20 16:06:50.655931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.525 [2024-11-20 16:06:50.660204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.660247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.660261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.664478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.664522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.664538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.668757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.668798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.668823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.673048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.677205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.677248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.677261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.681530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.681571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.681584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.685793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.685850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.685864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.689989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.690029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:52.526 [2024-11-20 16:06:50.690052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.694212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.694253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.694266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.698466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.698508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.698522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.702686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.702726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.702739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.706965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.707007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.707020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.711220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.711263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.711277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.715472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.715515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.715528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.719704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.719743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.719757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.723941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.723981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.723994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.728198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.728241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.728254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.732459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.732502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.732515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.736717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.736760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.736774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.740959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.740999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.741012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.745223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.745265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.745278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.749515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.749556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.749569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.753747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.753790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.753803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.758081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.758123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.758136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.762298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.762341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.526 [2024-11-20 16:06:50.762354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.526 [2024-11-20 16:06:50.766579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.526 [2024-11-20 16:06:50.766622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.527 [2024-11-20 16:06:50.766636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.527 [2024-11-20 16:06:50.770874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.527 [2024-11-20 16:06:50.770911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.527 [2024-11-20 16:06:50.770925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.775168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.775211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.779417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 
00:18:52.787 [2024-11-20 16:06:50.779461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.779474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.783660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.783703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.783716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.787946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.787988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.788001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.792198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.792240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.792253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.796475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.796518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.796531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.800756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.800799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.800826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.805003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.805046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.805060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.809221] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.809262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.809276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.813512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.813554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.813568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.817845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.817892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.817906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.822142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.822185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.822199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.826409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.826466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.826480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.830650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.830693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.830707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.834989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.835032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.835045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:18:52.787 [2024-11-20 16:06:50.839550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.839595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.839608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.843828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.843872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.843885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.848088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.848131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.848145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.852409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.852452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.852465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.856730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.856774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.856788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.861125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.861178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.861192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.865480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.865537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.865552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.869885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.787 [2024-11-20 16:06:50.869940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.787 [2024-11-20 16:06:50.869955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.787 [2024-11-20 16:06:50.874221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.874274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.874287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.878557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.878614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.878628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.882839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.882887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.882901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.887078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.887122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.887135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.891286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.891329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.891343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.895577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.895620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.895633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.899822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.899863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.899878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.904064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.904106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.904120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:52.788 [2024-11-20 16:06:50.908271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.908314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.908328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:52.788 7192.00 IOPS, 899.00 MiB/s [2024-11-20T16:06:51.038Z] [2024-11-20 16:06:50.914191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1eb2400) 00:18:52.788 [2024-11-20 16:06:50.914231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:52.788 [2024-11-20 16:06:50.914244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:52.788 00:18:52.788 Latency(us) 00:18:52.788 [2024-11-20T16:06:51.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.788 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:52.788 nvme0n1 : 2.00 7189.10 898.64 0.00 0.00 2221.85 1921.40 7447.27 00:18:52.788 [2024-11-20T16:06:51.038Z] =================================================================================================================== 00:18:52.788 [2024-11-20T16:06:51.038Z] Total : 7189.10 898.64 0.00 0.00 2221.85 1921.40 7447.27 00:18:52.788 { 00:18:52.788 "results": [ 00:18:52.788 { 00:18:52.788 "job": "nvme0n1", 00:18:52.788 "core_mask": "0x2", 00:18:52.788 "workload": "randread", 00:18:52.788 "status": "finished", 00:18:52.788 "queue_depth": 16, 00:18:52.788 "io_size": 131072, 00:18:52.788 "runtime": 2.003032, 00:18:52.788 "iops": 7189.101322395249, 00:18:52.788 "mibps": 898.6376652994061, 00:18:52.788 "io_failed": 0, 00:18:52.788 "io_timeout": 0, 00:18:52.788 "avg_latency_us": 2221.8534787878784, 00:18:52.788 "min_latency_us": 1921.3963636363637, 00:18:52.788 "max_latency_us": 7447.272727272727 00:18:52.788 } 00:18:52.788 ], 00:18:52.788 "core_count": 1 00:18:52.788 } 00:18:52.788 16:06:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 
-- # get_transient_errcount nvme0n1 00:18:52.788 16:06:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:52.788 16:06:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:52.788 | .driver_specific 00:18:52.788 | .nvme_error 00:18:52.788 | .status_code 00:18:52.788 | .command_transient_transport_error' 00:18:52.788 16:06:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 465 > 0 )) 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80849 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80849 ']' 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80849 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.047 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80849 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:53.306 killing process with pid 80849 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80849' 00:18:53.306 Received shutdown signal, test time was about 2.000000 seconds 00:18:53.306 00:18:53.306 Latency(us) 00:18:53.306 [2024-11-20T16:06:51.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.306 [2024-11-20T16:06:51.556Z] =================================================================================================================== 00:18:53.306 [2024-11-20T16:06:51.556Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80849 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80849 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80902 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:53.306 16:06:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80902 /var/tmp/bperf.sock 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80902 ']' 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.306 16:06:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:53.564 [2024-11-20 16:06:51.564420] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:53.564 [2024-11-20 16:06:51.564578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80902 ] 00:18:53.564 [2024-11-20 16:06:51.714876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.564 [2024-11-20 16:06:51.776173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.823 [2024-11-20 16:06:51.829611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:54.400 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.400 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:54.400 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:54.400 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:54.658 16:06:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:55.226 nvme0n1 00:18:55.226 16:06:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:55.227 16:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.227 16:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:55.227 16:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.227 16:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:55.227 16:06:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:55.227 Running I/O for 2 seconds... 00:18:55.227 [2024-11-20 16:06:53.377652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efc560 00:18:55.227 [2024-11-20 16:06:53.379093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.379136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:55.227 [2024-11-20 16:06:53.393914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efcdd0 00:18:55.227 [2024-11-20 16:06:53.395295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.395333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:55.227 [2024-11-20 16:06:53.410335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efd640 00:18:55.227 [2024-11-20 16:06:53.411741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.411782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:55.227 [2024-11-20 16:06:53.426995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efdeb0 00:18:55.227 [2024-11-20 16:06:53.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.428443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:55.227 [2024-11-20 16:06:53.443690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efe720 00:18:55.227 [2024-11-20 16:06:53.445074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.445115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:55.227 [2024-11-20 16:06:53.460218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eff3c8 
00:18:55.227 [2024-11-20 16:06:53.461550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.227 [2024-11-20 16:06:53.461594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.483390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eff3c8 00:18:55.486 [2024-11-20 16:06:53.485974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.486020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.499762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efe720 00:18:55.486 [2024-11-20 16:06:53.502287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.502329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.515999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efdeb0 00:18:55.486 [2024-11-20 16:06:53.518498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.518541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.532241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efd640 00:18:55.486 [2024-11-20 16:06:53.534706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.534748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.548416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efcdd0 00:18:55.486 [2024-11-20 16:06:53.550876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.550917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.564598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efc560 00:18:55.486 [2024-11-20 16:06:53.567040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.567080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.581873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with 
pdu=0x200016efbcf0 00:18:55.486 [2024-11-20 16:06:53.585087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.585125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.601378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efb480 00:18:55.486 [2024-11-20 16:06:53.604501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.604542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.620754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efac10 00:18:55.486 [2024-11-20 16:06:53.623849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.623892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.640110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016efa3a0 00:18:55.486 [2024-11-20 16:06:53.643117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.643156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.659623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef9b30 00:18:55.486 [2024-11-20 16:06:53.662692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.662735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.678982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef92c0 00:18:55.486 [2024-11-20 16:06:53.681973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:14902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.682015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.698239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef8a50 00:18:55.486 [2024-11-20 16:06:53.701208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.701247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.715321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x188bae0) with pdu=0x200016ef81e0 00:18:55.486 [2024-11-20 16:06:53.717608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.486 [2024-11-20 16:06:53.717650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:55.486 [2024-11-20 16:06:53.732139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef7970 00:18:55.745 [2024-11-20 16:06:53.734524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.734574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.749180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef7100 00:18:55.746 [2024-11-20 16:06:53.751461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.751503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.765457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef6890 00:18:55.746 [2024-11-20 16:06:53.767673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.767711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.781699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef6020 00:18:55.746 [2024-11-20 16:06:53.783910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.783946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.797970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef57b0 00:18:55.746 [2024-11-20 16:06:53.800145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.800185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.814189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef4f40 00:18:55.746 [2024-11-20 16:06:53.816337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.816375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.830381] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef46d0 00:18:55.746 [2024-11-20 16:06:53.832522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.832561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.847085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef3e60 00:18:55.746 [2024-11-20 16:06:53.849287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.849336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.863967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef35f0 00:18:55.746 [2024-11-20 16:06:53.866115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.866160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.880244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef2d80 00:18:55.746 [2024-11-20 16:06:53.882333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.882374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.896446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef2510 00:18:55.746 [2024-11-20 16:06:53.898523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.898564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.912788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef1ca0 00:18:55.746 [2024-11-20 16:06:53.914838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.914880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.928930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef1430 00:18:55.746 [2024-11-20 16:06:53.930955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.931006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.945126] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef0bc0 00:18:55.746 [2024-11-20 16:06:53.947134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.947172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.961865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef0350 00:18:55.746 [2024-11-20 16:06:53.963929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.963981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:55.746 [2024-11-20 16:06:53.978892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eefae0 00:18:55.746 [2024-11-20 16:06:53.980886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:21014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:55.746 [2024-11-20 16:06:53.980926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:56.006 [2024-11-20 16:06:53.995152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eef270 00:18:56.006 [2024-11-20 16:06:53.997121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:9251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:53.997160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:56.006 [2024-11-20 16:06:54.011536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeea00 00:18:56.006 [2024-11-20 16:06:54.013488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:54.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:56.006 [2024-11-20 16:06:54.027827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eee190 00:18:56.006 [2024-11-20 16:06:54.029732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:54.029775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:56.006 [2024-11-20 16:06:54.044157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eed920 00:18:56.006 [2024-11-20 16:06:54.046056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:54.046099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:56.006 
[2024-11-20 16:06:54.060420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eed0b0 00:18:56.006 [2024-11-20 16:06:54.062316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:54.062358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:56.006 [2024-11-20 16:06:54.076687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eec840 00:18:56.006 [2024-11-20 16:06:54.078559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.006 [2024-11-20 16:06:54.078598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.093017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eebfd0 00:18:56.007 [2024-11-20 16:06:54.094868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.094912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.109459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeb760 00:18:56.007 [2024-11-20 16:06:54.111287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.111325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.125838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeaef0 00:18:56.007 [2024-11-20 16:06:54.127631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.127671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.142271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eea680 00:18:56.007 [2024-11-20 16:06:54.144043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.144082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.158665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee9e10 00:18:56.007 [2024-11-20 16:06:54.160449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.160499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 
dnr:0 00:18:56.007 [2024-11-20 16:06:54.175160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee95a0 00:18:56.007 [2024-11-20 16:06:54.176957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.177000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.192021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee8d30 00:18:56.007 [2024-11-20 16:06:54.193757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.193801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.208294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee84c0 00:18:56.007 [2024-11-20 16:06:54.209996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.210037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.224577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee7c50 00:18:56.007 [2024-11-20 16:06:54.226261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.226303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:56.007 [2024-11-20 16:06:54.240819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee73e0 00:18:56.007 [2024-11-20 16:06:54.242475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.007 [2024-11-20 16:06:54.242516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.257100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee6b70 00:18:56.266 [2024-11-20 16:06:54.258763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.258804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.274073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee6300 00:18:56.266 [2024-11-20 16:06:54.275750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.275796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 
cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.290697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee5a90 00:18:56.266 [2024-11-20 16:06:54.292294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.292333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.307023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee5220 00:18:56.266 [2024-11-20 16:06:54.308626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.308669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.323516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee49b0 00:18:56.266 [2024-11-20 16:06:54.325086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.325125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.340189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee4140 00:18:56.266 [2024-11-20 16:06:54.341760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.341807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.356780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee38d0 00:18:56.266 [2024-11-20 16:06:54.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.358382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:56.266 15055.00 IOPS, 58.81 MiB/s [2024-11-20T16:06:54.516Z] [2024-11-20 16:06:54.373216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee3060 00:18:56.266 [2024-11-20 16:06:54.374721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.374762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.389491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee27f0 00:18:56.266 [2024-11-20 16:06:54.390976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.391014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.405757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee1f80 00:18:56.266 [2024-11-20 16:06:54.407218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.407257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.422113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee1710 00:18:56.266 [2024-11-20 16:06:54.423565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.423607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.438994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee0ea0 00:18:56.266 [2024-11-20 16:06:54.440456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.440499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.455359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee0630 00:18:56.266 [2024-11-20 16:06:54.456764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.456803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.471674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edfdc0 00:18:56.266 [2024-11-20 16:06:54.473055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.473093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.488126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edf550 00:18:56.266 [2024-11-20 16:06:54.489552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 16:06:54.489599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:56.266 [2024-11-20 16:06:54.505060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edece0 00:18:56.266 [2024-11-20 16:06:54.506450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.266 [2024-11-20 
16:06:54.506493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.521455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ede470 00:18:56.525 [2024-11-20 16:06:54.522772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.522821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.544587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eddc00 00:18:56.525 [2024-11-20 16:06:54.547164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.547207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.560898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ede470 00:18:56.525 [2024-11-20 16:06:54.563419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.563461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.577105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edece0 00:18:56.525 [2024-11-20 16:06:54.579603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.579644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.593299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edf550 00:18:56.525 [2024-11-20 16:06:54.595799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.595846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.610172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016edfdc0 00:18:56.525 [2024-11-20 16:06:54.612730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.612780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.627206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee0630 00:18:56.525 [2024-11-20 16:06:54.629701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:56.525 [2024-11-20 16:06:54.629750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.643654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee0ea0 00:18:56.525 [2024-11-20 16:06:54.646114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.646159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.660041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee1710 00:18:56.525 [2024-11-20 16:06:54.662462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.662507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.676340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee1f80 00:18:56.525 [2024-11-20 16:06:54.678739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.678782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.692642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee27f0 00:18:56.525 [2024-11-20 16:06:54.695038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.695082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.708978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee3060 00:18:56.525 [2024-11-20 16:06:54.711324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.711365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.725369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee38d0 00:18:56.525 [2024-11-20 16:06:54.727693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.727732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.741694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee4140 00:18:56.525 [2024-11-20 16:06:54.744028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24766 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.744066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:56.525 [2024-11-20 16:06:54.757979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee49b0 00:18:56.525 [2024-11-20 16:06:54.760282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.525 [2024-11-20 16:06:54.760321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.774262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee5220 00:18:56.785 [2024-11-20 16:06:54.776518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.776556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.790496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee5a90 00:18:56.785 [2024-11-20 16:06:54.792730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.792766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.806688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee6300 00:18:56.785 [2024-11-20 16:06:54.808915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.808951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.822935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee6b70 00:18:56.785 [2024-11-20 16:06:54.825145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.825184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.839238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee73e0 00:18:56.785 [2024-11-20 16:06:54.841429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.841472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.855898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee7c50 00:18:56.785 [2024-11-20 16:06:54.858179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:16481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.858227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.872496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee84c0 00:18:56.785 [2024-11-20 16:06:54.874660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.874705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.888831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee8d30 00:18:56.785 [2024-11-20 16:06:54.890972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-11-20 16:06:54.891015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:56.785 [2024-11-20 16:06:54.905156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee95a0 00:18:56.786 [2024-11-20 16:06:54.907286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.907327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:54.921547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ee9e10 00:18:56.786 [2024-11-20 16:06:54.923652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.923691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:54.938123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eea680 00:18:56.786 [2024-11-20 16:06:54.940231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.940271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:54.954607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeaef0 00:18:56.786 [2024-11-20 16:06:54.956673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.956712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:54.970967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeb760 00:18:56.786 [2024-11-20 16:06:54.973010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:29 nsid:1 lba:22642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.973049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:54.987220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eebfd0 00:18:56.786 [2024-11-20 16:06:54.989226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:54.989264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:55.003467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eec840 00:18:56.786 [2024-11-20 16:06:55.005491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:55.005537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:56.786 [2024-11-20 16:06:55.020316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eed0b0 00:18:56.786 [2024-11-20 16:06:55.022368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.786 [2024-11-20 16:06:55.022414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:57.055 [2024-11-20 16:06:55.037069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eed920 00:18:57.055 [2024-11-20 16:06:55.039049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.055 [2024-11-20 16:06:55.039089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:57.055 [2024-11-20 16:06:55.053386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eee190 00:18:57.055 [2024-11-20 16:06:55.055328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.055 [2024-11-20 16:06:55.055366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:57.055 [2024-11-20 16:06:55.069764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eeea00 00:18:57.055 [2024-11-20 16:06:55.071688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.055 [2024-11-20 16:06:55.071726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:57.055 [2024-11-20 16:06:55.086007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eef270 00:18:57.055 [2024-11-20 16:06:55.087893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.055 [2024-11-20 16:06:55.087931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:57.055 [2024-11-20 16:06:55.102272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016eefae0 00:18:57.056 [2024-11-20 16:06:55.104177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.104215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.118559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef0350 00:18:57.056 [2024-11-20 16:06:55.120435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.120473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.135075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef0bc0 00:18:57.056 [2024-11-20 16:06:55.136934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.136972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.151505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef1430 00:18:57.056 [2024-11-20 16:06:55.153357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.153400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.168071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef1ca0 00:18:57.056 [2024-11-20 16:06:55.169934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.169980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.184479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef2510 00:18:57.056 [2024-11-20 16:06:55.186276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.186316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.200807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef2d80 00:18:57.056 [2024-11-20 
16:06:55.202575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.202618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.217113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef35f0 00:18:57.056 [2024-11-20 16:06:55.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.218912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.233470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef3e60 00:18:57.056 [2024-11-20 16:06:55.235207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.235245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.250085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef46d0 00:18:57.056 [2024-11-20 16:06:55.251866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.251909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.266670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef4f40 00:18:57.056 [2024-11-20 16:06:55.268366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.268404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.282970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef57b0 00:18:57.056 [2024-11-20 16:06:55.284626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.284664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:57.056 [2024-11-20 16:06:55.299362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef6020 00:18:57.056 [2024-11-20 16:06:55.301033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.056 [2024-11-20 16:06:55.301073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:57.314 [2024-11-20 16:06:55.315869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef6890 
00:18:57.314 [2024-11-20 16:06:55.317551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.314 [2024-11-20 16:06:55.317597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:57.314 [2024-11-20 16:06:55.332627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef7100 00:18:57.314 [2024-11-20 16:06:55.334291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.314 [2024-11-20 16:06:55.334335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:57.314 [2024-11-20 16:06:55.349544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef7970 00:18:57.314 [2024-11-20 16:06:55.351159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.314 [2024-11-20 16:06:55.351203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:57.314 15244.50 IOPS, 59.55 MiB/s [2024-11-20T16:06:55.564Z] [2024-11-20 16:06:55.366223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x188bae0) with pdu=0x200016ef81e0 00:18:57.314 [2024-11-20 16:06:55.367822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:57.314 [2024-11-20 16:06:55.367859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:57.314 00:18:57.314 Latency(us) 00:18:57.314 [2024-11-20T16:06:55.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.314 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.314 nvme0n1 : 2.01 15262.63 59.62 0.00 0.00 8379.94 2353.34 31218.97 00:18:57.314 [2024-11-20T16:06:55.564Z] =================================================================================================================== 00:18:57.314 [2024-11-20T16:06:55.564Z] Total : 15262.63 59.62 0.00 0.00 8379.94 2353.34 31218.97 00:18:57.314 { 00:18:57.314 "results": [ 00:18:57.314 { 00:18:57.314 "job": "nvme0n1", 00:18:57.314 "core_mask": "0x2", 00:18:57.314 "workload": "randwrite", 00:18:57.314 "status": "finished", 00:18:57.314 "queue_depth": 128, 00:18:57.314 "io_size": 4096, 00:18:57.314 "runtime": 2.006011, 00:18:57.314 "iops": 15262.628171031964, 00:18:57.314 "mibps": 59.61964129309361, 00:18:57.314 "io_failed": 0, 00:18:57.314 "io_timeout": 0, 00:18:57.314 "avg_latency_us": 8379.94384723876, 00:18:57.314 "min_latency_us": 2353.338181818182, 00:18:57.314 "max_latency_us": 31218.967272727274 00:18:57.314 } 00:18:57.314 ], 00:18:57.314 "core_count": 1 00:18:57.314 } 00:18:57.314 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:57.314 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:57.314 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 
00:18:57.314 | .driver_specific 00:18:57.314 | .nvme_error 00:18:57.314 | .status_code 00:18:57.314 | .command_transient_transport_error' 00:18:57.314 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 )) 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80902 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80902 ']' 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80902 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80902 00:18:57.573 killing process with pid 80902 00:18:57.573 Received shutdown signal, test time was about 2.000000 seconds 00:18:57.573 00:18:57.573 Latency(us) 00:18:57.573 [2024-11-20T16:06:55.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.573 [2024-11-20T16:06:55.823Z] =================================================================================================================== 00:18:57.573 [2024-11-20T16:06:55.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80902' 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80902 00:18:57.573 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80902 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80962 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80962 /var/tmp/bperf.sock 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 80962 ']' 00:18:57.832 16:06:55 
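The get_transient_errcount step traced above reduces to one RPC call plus a jq filter over the iostat output; a minimal sketch reconstructed from this trace, reusing the same repo path and bdevperf RPC socket shown in the run:

#!/usr/bin/env bash
# Sketch of the transient-error check traced above (host/digest.sh).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# Read the per-bdev NVMe error counters (collected because bdev_nvme_set_options
# was given --nvme-error-stat) and pull out the transient transport error count.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The check passes when at least one injected digest error surfaced as a
# COMMAND TRANSIENT TRANSPORT ERROR completion; the trace above shows 120.
(( errcount > 0 )) && echo "transient transport errors: $errcount"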
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:57.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.832 16:06:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:57.832 [2024-11-20 16:06:55.993281] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:18:57.832 [2024-11-20 16:06:55.993722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80962 ] 00:18:57.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:57.832 Zero copy mechanism will not be used. 00:18:58.090 [2024-11-20 16:06:56.139904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.090 [2024-11-20 16:06:56.203786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.090 [2024-11-20 16:06:56.257640] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:59.023 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.023 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:18:59.023 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:59.023 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.282 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:59.540 nvme0n1 00:18:59.540 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:59.540 16:06:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.540 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:59.540 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.540 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:59.540 16:06:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:59.800 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:59.800 Zero copy mechanism will not be used. 00:18:59.800 Running I/O for 2 seconds... 00:18:59.800 [2024-11-20 16:06:57.820762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.800 [2024-11-20 16:06:57.820882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.800 [2024-11-20 16:06:57.820914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.800 [2024-11-20 16:06:57.826183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.800 [2024-11-20 16:06:57.826253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.826279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.831301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.831371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.831395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.836388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.836460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.836484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.841569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.841641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.841665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.846706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 
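For the run_bperf_err randwrite 131072 16 case traced above, the setup sequence condenses to roughly the following sketch. Every path, flag and address is taken from the xtrace output; the only assumption is that the rpc_cmd helper (whose socket path is not printed in the trace) talks to the default RPC socket rather than the bdevperf socket.

#!/usr/bin/env bash
# Condensed sketch of the 131072-byte / qd 16 error-injection run traced above.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bperf.sock
# Start bdevperf in wait-for-RPC mode: 2 cores, randwrite, 128 KiB I/O, qd 16, 2 s.
"$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
# Enable per-controller NVMe error statistics and unlimited bdev retries.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Trace uses rpc_cmd here; assumed to hit the default RPC socket (no -s shown).
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data digest enabled.
"$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c on 32 operations so data digest calculation fails.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the workload; the injected corruption then shows up as the Data digest error /
# COMMAND TRANSIENT TRANSPORT ERROR records that follow in this log.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests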
[2024-11-20 16:06:57.846944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.846968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.852022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.852093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.852117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.857127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.857220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.857244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.862355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.862549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.862573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.867669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.867741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.867765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.872934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.873038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.873062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.878195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.878281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.878304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.883467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with 
pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.883572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.883595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.888747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.888870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.888894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.894025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.894097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.894119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.899313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.899397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.899420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.904555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.904646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.904669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.909847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.909949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.909971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.915089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.915155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.915177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.920373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.920458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.920480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.925578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.925792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.925815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.931035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.931123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.931145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.936166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.936268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.936290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.941450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.941646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.941668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.946839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.946954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.801 [2024-11-20 16:06:57.946977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.801 [2024-11-20 16:06:57.952159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.801 [2024-11-20 16:06:57.952256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.952278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.957365] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.957569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.957592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.962752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.962844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.962881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.968073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.968157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.968180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.973418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.973638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.973661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.978827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.978918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.978941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.984062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.984133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.984156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.989150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.989226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.989249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.994309] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.994386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.994409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:57.999404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:57.999475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:57.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.004502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.004580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.004604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.009670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.009893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.009916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.014858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.014933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.014956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.019981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.020058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.020081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.025078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.025150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.025173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:59.802 
[2024-11-20 16:06:58.030198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.030277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.030300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.035252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.035324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.035347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.040318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.040393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.040417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:59.802 [2024-11-20 16:06:58.045361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:18:59.802 [2024-11-20 16:06:58.045564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:59.802 [2024-11-20 16:06:58.045587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.050622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.050701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.050724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.055738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.055823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.055847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.060854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.060926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.060948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 
p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.065970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.066041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.066065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.071062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.071134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.071157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.076174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.076375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.076397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.081615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.081862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.082129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.086878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.087093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.087326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.092156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.092386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.092613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.097432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.097658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.097863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.102768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.103011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.103182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.108094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.108320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.063 [2024-11-20 16:06:58.108490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.063 [2024-11-20 16:06:58.113360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.063 [2024-11-20 16:06:58.113571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.113734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.118623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.118854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.118884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.123939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.124018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.124042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.129033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.129105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.129128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.134142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.134213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.134236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.139233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.139427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.139450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.144553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.144636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.144659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.149607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.149678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.149704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.154738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.154973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.154998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.160055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.160133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.160156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.165184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.165259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.165282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.170312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.170497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.170520] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.175593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.175674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.175697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.180688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.180756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.180779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.185838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.185910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.185933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.190921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.191007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.191030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.196107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.196183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.196206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.201257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.201341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.201365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.206308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.206510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.206533] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.211654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.211728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.211751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.216775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.216869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.216893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.221989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.222078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.222101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.064 [2024-11-20 16:06:58.227189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.064 [2024-11-20 16:06:58.227255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.064 [2024-11-20 16:06:58.227278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.232386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.232473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.232495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.237497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.237708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.237731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.242906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.243011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 
16:06:58.243034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.247953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.248043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.248065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.253015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.253112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.253134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.258105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.258196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.258220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.263358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.263430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.263453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.268497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.268611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.268635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.273781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.274036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.274059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.279150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.279234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:00.065 [2024-11-20 16:06:58.279256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.284369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.284450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.284471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.289555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.289749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.289771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.294934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.295017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.295038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.299978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.300061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.300084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.065 [2024-11-20 16:06:58.304898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.065 [2024-11-20 16:06:58.304982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.065 [2024-11-20 16:06:58.305004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.309806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.310070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.310092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.314945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.315033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.315055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.319933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.320038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.320060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.324925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.325022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.325045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.329998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.330086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.330108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.334983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.335066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.335087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.339923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.340007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.340029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.344860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.344954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.344992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.349910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.349993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.350015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.354848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.354932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.354954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.359840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.359921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.359947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.364858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.364938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.364962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.369782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.370058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.370080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.375058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.375141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.375162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.380212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.380301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.380323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.325 [2024-11-20 16:06:58.385424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.325 [2024-11-20 16:06:58.385620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.325 [2024-11-20 16:06:58.385643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.390787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.390890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.390913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.396010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.396083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.396106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.401092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.401163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.401186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.406351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.406436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.406457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.411556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.411657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.411679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.416918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.416991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.417014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.422240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.422321] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.422343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.427428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.427516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.427537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.432678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.432761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.432783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.437701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.437947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.437971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.443116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.443374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.448513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.448763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.448949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.453888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.454137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.454317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.459165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 
16:06:58.459408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.459560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.464523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.464772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.464959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.469848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.470114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.470321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.475388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.475682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.475977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.480825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.481144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.481173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.486140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.486243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.486266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.491187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.491286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.491307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.496413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 
00:19:00.326 [2024-11-20 16:06:58.496500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.496522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.501476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.501733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.501757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.507036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.507136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.507159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.512390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.512496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.512519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.517736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.517976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.518001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.523368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.523616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.523788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.326 [2024-11-20 16:06:58.528826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.326 [2024-11-20 16:06:58.529083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.326 [2024-11-20 16:06:58.529372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.534271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) 
with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.534541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.534788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.539707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.539955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.540208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.545190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.545442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.545643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.550565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.550821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.551055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.556113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.556378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.556636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.561519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.561794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.561985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.327 [2024-11-20 16:06:58.567063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.327 [2024-11-20 16:06:58.567328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.327 [2024-11-20 16:06:58.567607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.572606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.572860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.573034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.578013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.578290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.578504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.583445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.583699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.583922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.588792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.589038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.589067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.594117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.594241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.594264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.599184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.599310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.599333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.604375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.604482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.604505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.609538] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.609791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.609814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.614727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.614881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.614905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.619848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.619950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.619987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.624993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.625094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.625116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.630195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.630302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.630325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.635313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.635401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.635424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.640459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.640545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.640583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.647 
[2024-11-20 16:06:58.645746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.646005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.646028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.651167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.651281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.651304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.656260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.656372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.661323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.661553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.661577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.666529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.666635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.666658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.671656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.671764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.671786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.647 [2024-11-20 16:06:58.676789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.647 [2024-11-20 16:06:58.676936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.647 [2024-11-20 16:06:58.676960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 
p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.682053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.682141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.682164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.687151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.687226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.687249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.692270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.692377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.692400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.697318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.697579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.697602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.702609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.702719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.702741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.707883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.708052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.708075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.713139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.713239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.713262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.718320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.718394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.718417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.723387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.723460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.723483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.728606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.728705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.728728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.733754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.733980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.734004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.739069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.739145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.739168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.744119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.744226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.744250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.749229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.749479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.749502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.754598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.754695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.754718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.759738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.759829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.759868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.764852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.764962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.764985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.770021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.770117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.770141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.775142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.775214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.775238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.780255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.780468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.780492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.785661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.785738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.785762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.790820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.790932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.790956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.795905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.796000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.796023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.801058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.801147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.801170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.806246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.806320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.806343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.648 [2024-11-20 16:06:58.811362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.811557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.648 [2024-11-20 16:06:58.811581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.648 5912.00 IOPS, 739.00 MiB/s [2024-11-20T16:06:58.898Z] [2024-11-20 16:06:58.817692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.648 [2024-11-20 16:06:58.817774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.817799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.822828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.822923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.822947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.828004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.828077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.828101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.833143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.833249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.833271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.838412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.838520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.838543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.843671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.843901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.843925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.849081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.849169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.849192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.854378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.854492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.859629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.859881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.859920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.864949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.865055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.870146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.870236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.870259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.649 [2024-11-20 16:06:58.875218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.649 [2024-11-20 16:06:58.875414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.649 [2024-11-20 16:06:58.875437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.911 [2024-11-20 16:06:58.880535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.911 [2024-11-20 16:06:58.880642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.911 [2024-11-20 16:06:58.880664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.911 [2024-11-20 16:06:58.885822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.911 [2024-11-20 16:06:58.885966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.911 [2024-11-20 16:06:58.885989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.911 [2024-11-20 16:06:58.890900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.911 [2024-11-20 16:06:58.890996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.911 [2024-11-20 16:06:58.891018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.911 [2024-11-20 16:06:58.896050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.911 [2024-11-20 16:06:58.896124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.911 [2024-11-20 16:06:58.896146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.901039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.901145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.901167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.906250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.906488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.906511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.911930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.912189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.912388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.917407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.917660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.917917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.922782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.923061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.923222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.928162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.928402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.928682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.933397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.933623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.933904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.938694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.938934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.939092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.943938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.944192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.944350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.949370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.949670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.949895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.954732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.954977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.955159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.960022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.960263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.960450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.965431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.965649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.965859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.970807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 
16:06:58.971050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.971200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.976063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.976301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.976464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.981495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.981738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.981987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.986831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.987068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.987236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.992204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.992430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.992580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:58.997667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:58.997992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:58.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.003111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.003400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.003587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.008501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 
00:19:00.912 [2024-11-20 16:06:59.008728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.008893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.013845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.014090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.014296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.019117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.019348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.019516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.024342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.024531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.024555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.029764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.030013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.030163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.035013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.035271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.035433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.912 [2024-11-20 16:06:59.040265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.912 [2024-11-20 16:06:59.040510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.912 [2024-11-20 16:06:59.040695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.045561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.045771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.045944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.050891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.051134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.051340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.056227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.056469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.056494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.061511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.061699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.061723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.066919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.067147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.067311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.072196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.072436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.072604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.077462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.077697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.077922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.082732] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.082987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.083215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.088050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.088260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.088412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.093287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.093509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.093717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.098594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.098846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.099013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.103916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.104148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.104315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.109212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.109461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.109625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.114481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.114697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.114943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.119766] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.119855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.119879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.124884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.124991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.125014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.130009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.130102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.130125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.135060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.135170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.135193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.140154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.140251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.140274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.145268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.145396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.145420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:00.913 [2024-11-20 16:06:59.150429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.150521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.150546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.913 
[2024-11-20 16:06:59.155577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:00.913 [2024-11-20 16:06:59.155669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.913 [2024-11-20 16:06:59.155693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.173 [2024-11-20 16:06:59.160795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.173 [2024-11-20 16:06:59.161043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.173 [2024-11-20 16:06:59.161067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.173 [2024-11-20 16:06:59.166193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.173 [2024-11-20 16:06:59.166299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.173 [2024-11-20 16:06:59.166322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.173 [2024-11-20 16:06:59.171257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.173 [2024-11-20 16:06:59.171345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.173 [2024-11-20 16:06:59.171368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.173 [2024-11-20 16:06:59.176315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.176518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.176541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.181533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.181629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.181651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.186680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.186788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.186811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 
p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.191754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.191880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.191903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.196853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.196939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.196963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.201925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.202020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.202043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.206987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.207096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.207119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.212042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.212148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.212171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.217152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.217260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.217283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.222177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.222261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.222283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.227367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.227453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.227476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.232560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.232808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.232832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.237873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.237979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.238001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.242895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.242997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.243019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.247921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.248022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.248045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.252948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.253053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.253075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.258031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.258137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.258160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.263144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.263238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.263260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.268352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.268435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.268456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.273490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.273563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.273586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.278600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.278695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.278717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.283759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.284027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.284050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.289121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.289211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.289233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.294403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.294508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.294531] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.299628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.299867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.299893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.304982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.305078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.305101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.310108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.310196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.310219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.315273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.315505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.315527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.320808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.320914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.320937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.326059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.326182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.326205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.331304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.331390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.331412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.336602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.336695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.336718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.341757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.341882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.341905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.346888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.346986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.347008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.174 [2024-11-20 16:06:59.352074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.174 [2024-11-20 16:06:59.352197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.174 [2024-11-20 16:06:59.352220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.357231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.357391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.362502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.362778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.362801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.367823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.367931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 
16:06:59.367954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.373169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.373263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.373286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.378256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.378490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.378514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.383678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.383922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.384086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.389079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.389345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.389553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.394360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.394604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.394756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.399695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.399968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.400224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.405050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.405357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:01.175 [2024-11-20 16:06:59.405624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.410455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.410698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.410883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.415725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.175 [2024-11-20 16:06:59.415982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.175 [2024-11-20 16:06:59.416204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.175 [2024-11-20 16:06:59.421044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.421288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.421461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.426307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.426563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.426733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.431660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.431869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.431894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.437041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.437138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.437161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.442267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.442477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.442500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.447749] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.447869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.447905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.453055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.453132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.453156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.458340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.458428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.458452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.463667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.463773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.463797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.469066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.469160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.469184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.474347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.474423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.474447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.479605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.479692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.479715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.435 [2024-11-20 16:06:59.484893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.435 [2024-11-20 16:06:59.485017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.435 [2024-11-20 16:06:59.485040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.490170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.490275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.490297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.495378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.495472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.495495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.500586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.500662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.500685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.505814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.506063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.506086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.511272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.511348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.511372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.516652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.516727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.516751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.521896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.521973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.521996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.527087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.527170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.527193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.532315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.532389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.532412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.537527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.537728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.537752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.542942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.543051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.543075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.548112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.548233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.548256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.553364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.553565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.553588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.558706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.558797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.558821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.563866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.563974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.563997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.568973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.569081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.569103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.574155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.574242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.574271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.579333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.579444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.579466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.584559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.584638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.584661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.589755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 
16:06:59.590009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.590033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.595135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.595250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.595273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.600378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.600455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.600478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.605623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.605856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.605879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.611008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.611089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.611112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.616157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.616231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.616253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.621319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.621573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.621596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.436 [2024-11-20 16:06:59.626629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with 
pdu=0x200016efef90 00:19:01.436 [2024-11-20 16:06:59.626724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.436 [2024-11-20 16:06:59.626747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.631691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.631786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.631822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.636775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.636888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.636911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.641874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.641969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.641991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.646903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.646975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.646998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.651941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.652035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.652057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.656999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.657072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.657095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.662078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.662172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.662195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.667128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.667201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.667224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.672253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.672346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.672369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.437 [2024-11-20 16:06:59.677310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.437 [2024-11-20 16:06:59.677545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.437 [2024-11-20 16:06:59.677567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.682574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.682670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.682693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.687625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.687731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.687753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.692708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.692783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.692806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.697828] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.697922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.697944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.702855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.702929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.702952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.707890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.707963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.707986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.712985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.713060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.713082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.718017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.718089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.718113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.723103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.723176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.723199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.728141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.728234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.728257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.733221] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.733315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.733349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.738279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.738372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.738395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.743336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.743410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.743433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.748349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.748570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.748592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.753616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.753716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.753739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.758697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.696 [2024-11-20 16:06:59.758771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.696 [2024-11-20 16:06:59.758794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.696 [2024-11-20 16:06:59.763757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.763871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.763894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.697 
[2024-11-20 16:06:59.768795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.769033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.769055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.774033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.774108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.774130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.779104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.779185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.779207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.784140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.784234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.784257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.789193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.789286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.789308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.794227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.794300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.794323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.799252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.799325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.799347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.804313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.804531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.804554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.809458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 [2024-11-20 16:06:59.809551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.809574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:01.697 [2024-11-20 16:06:59.814503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x18785b0) with pdu=0x200016efef90 00:19:01.697 5922.50 IOPS, 740.31 MiB/s [2024-11-20T16:06:59.947Z] [2024-11-20 16:06:59.816256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.697 [2024-11-20 16:06:59.816295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:01.697 00:19:01.697 Latency(us) 00:19:01.697 [2024-11-20T16:06:59.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.697 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:01.697 nvme0n1 : 2.00 5921.60 740.20 0.00 0.00 2696.00 1541.59 9115.46 00:19:01.697 [2024-11-20T16:06:59.947Z] =================================================================================================================== 00:19:01.697 [2024-11-20T16:06:59.947Z] Total : 5921.60 740.20 0.00 0.00 2696.00 1541.59 9115.46 00:19:01.697 { 00:19:01.697 "results": [ 00:19:01.697 { 00:19:01.697 "job": "nvme0n1", 00:19:01.697 "core_mask": "0x2", 00:19:01.697 "workload": "randwrite", 00:19:01.697 "status": "finished", 00:19:01.697 "queue_depth": 16, 00:19:01.697 "io_size": 131072, 00:19:01.697 "runtime": 2.004187, 00:19:01.697 "iops": 5921.603123860199, 00:19:01.697 "mibps": 740.2003904825249, 00:19:01.697 "io_failed": 0, 00:19:01.697 "io_timeout": 0, 00:19:01.697 "avg_latency_us": 2695.997440941263, 00:19:01.697 "min_latency_us": 1541.5854545454545, 00:19:01.697 "max_latency_us": 9115.461818181819 00:19:01.697 } 00:19:01.697 ], 00:19:01.697 "core_count": 1 00:19:01.697 } 00:19:01.697 16:06:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:01.697 16:06:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:01.697 16:06:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:01.697 16:06:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:01.697 | .driver_specific 00:19:01.697 | .nvme_error 00:19:01.697 | .status_code 00:19:01.697 | .command_transient_transport_error' 00:19:01.956 16:07:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 )) 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80962 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80962 ']' 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80962 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80962 00:19:01.956 killing process with pid 80962 00:19:01.956 Received shutdown signal, test time was about 2.000000 seconds 00:19:01.956 00:19:01.956 Latency(us) 00:19:01.956 [2024-11-20T16:07:00.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.956 [2024-11-20T16:07:00.206Z] =================================================================================================================== 00:19:01.956 [2024-11-20T16:07:00.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80962' 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80962 00:19:01.956 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80962 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80761 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 80761 ']' 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 80761 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80761 00:19:02.216 killing process with pid 80761 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80761' 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 80761 00:19:02.216 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 80761 00:19:02.474 ************************************ 00:19:02.474 END TEST nvmf_digest_error 00:19:02.474 
************************************ 00:19:02.474 00:19:02.474 real 0m18.539s 00:19:02.474 user 0m36.515s 00:19:02.474 sys 0m4.671s 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:02.474 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.733 rmmod nvme_tcp 00:19:02.733 rmmod nvme_fabrics 00:19:02.733 rmmod nvme_keyring 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 80761 ']' 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 80761 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 80761 ']' 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 80761 00:19:02.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80761) - No such process 00:19:02.733 Process with pid 80761 is not found 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 80761 is not found' 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:02.733 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:02.734 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:02.992 16:07:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:19:02.992 00:19:02.992 real 0m37.339s 00:19:02.992 user 1m11.412s 00:19:02.992 sys 0m9.536s 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.992 ************************************ 00:19:02.992 END TEST nvmf_digest 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:02.992 ************************************ 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:19:02.992 16:07:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.993 ************************************ 00:19:02.993 START TEST nvmf_host_multipath 00:19:02.993 ************************************ 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:02.993 * Looking for test storage... 
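(Recap of the teardown just traced above: the nvmftestfini/nvmf_veth_fini steps reduce to the shell sequence below. This is a condensed, illustrative sketch assembled only from the commands visible in this log, not the verbatim test/nvmf/common.sh source; the final namespace deletion is an assumption, since the trace here only shows the interface deletions performed inside the namespace.)

    # Unload the kernel NVMe-oF initiator modules loaded for the digest tests
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Drop the SPDK_NVMF iptables rules added during setup (the traced "iptr" step)
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach every veth/bridge leg from the test bridge, then bring it down
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster
    done
    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" down
    done

    # Remove the bridge and the initiator-side interfaces
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2

    # Remove the target-side interfaces that live inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2

    # Assumed final step: remove_spdk_ns is not expanded in this trace,
    # but it is expected to delete the nvmf_tgt_ns_spdk namespace itself
    ip netns delete nvmf_tgt_ns_spdk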
00:19:02.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.993 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:03.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.252 --rc genhtml_branch_coverage=1 00:19:03.252 --rc genhtml_function_coverage=1 00:19:03.252 --rc genhtml_legend=1 00:19:03.252 --rc geninfo_all_blocks=1 00:19:03.252 --rc geninfo_unexecuted_blocks=1 00:19:03.252 00:19:03.252 ' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:03.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.252 --rc genhtml_branch_coverage=1 00:19:03.252 --rc genhtml_function_coverage=1 00:19:03.252 --rc genhtml_legend=1 00:19:03.252 --rc geninfo_all_blocks=1 00:19:03.252 --rc geninfo_unexecuted_blocks=1 00:19:03.252 00:19:03.252 ' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:03.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.252 --rc genhtml_branch_coverage=1 00:19:03.252 --rc genhtml_function_coverage=1 00:19:03.252 --rc genhtml_legend=1 00:19:03.252 --rc geninfo_all_blocks=1 00:19:03.252 --rc geninfo_unexecuted_blocks=1 00:19:03.252 00:19:03.252 ' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:03.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.252 --rc genhtml_branch_coverage=1 00:19:03.252 --rc genhtml_function_coverage=1 00:19:03.252 --rc genhtml_legend=1 00:19:03.252 --rc geninfo_all_blocks=1 00:19:03.252 --rc geninfo_unexecuted_blocks=1 00:19:03.252 00:19:03.252 ' 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:19:03.252 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.253 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:03.253 Cannot find device "nvmf_init_br" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:03.253 Cannot find device "nvmf_init_br2" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:03.253 Cannot find device "nvmf_tgt_br" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.253 Cannot find device "nvmf_tgt_br2" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:03.253 Cannot find device "nvmf_init_br" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:03.253 Cannot find device "nvmf_init_br2" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:03.253 Cannot find device "nvmf_tgt_br" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:03.253 Cannot find device "nvmf_tgt_br2" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:03.253 Cannot find device "nvmf_br" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:03.253 Cannot find device "nvmf_init_if" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:03.253 Cannot find device "nvmf_init_if2" 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:19:03.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:19:03.253 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.253 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.254 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:03.513 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.513 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:19:03.513 00:19:03.513 --- 10.0.0.3 ping statistics --- 00:19:03.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.513 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:03.513 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:03.513 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:19:03.513 00:19:03.513 --- 10.0.0.4 ping statistics --- 00:19:03.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.513 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:03.513 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:03.513 00:19:03.513 --- 10.0.0.1 ping statistics --- 00:19:03.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.514 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:03.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:03.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:03.514 00:19:03.514 --- 10.0.0.2 ping statistics --- 00:19:03.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.514 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=81284 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 81284 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81284 ']' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.514 16:07:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:03.773 [2024-11-20 16:07:01.800071] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
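The nvmf_veth_init trace above builds the virtual fabric the rest of the run depends on: the initiator addresses 10.0.0.1/2 stay on the host, the target addresses 10.0.0.3/4 live inside the nvmf_tgt_ns_spdk namespace, and the four veth peers are joined by the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port and bridge forwarding, then verified by the pings. A condensed sketch of the same topology (a simplified reproduction assuming a clean host, not the test script itself; names follow the log) is:

#!/usr/bin/env bash
# Minimal sketch of the topology traced above: two initiator veths on the host,
# two target veths inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to the bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# move the target-side interfaces into the namespace where nvmf_tgt will run
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# initiator addresses on the host, target addresses inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# bring everything up, including loopback inside the namespace
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the four *_br peers together so host and namespace share one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# allow NVMe/TCP traffic to the default port and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity check, mirroring the pings in the trace
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1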
00:19:03.773 [2024-11-20 16:07:01.800365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.773 [2024-11-20 16:07:01.952878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:03.773 [2024-11-20 16:07:02.011519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.773 [2024-11-20 16:07:02.011859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.773 [2024-11-20 16:07:02.012125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.773 [2024-11-20 16:07:02.012349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.773 [2024-11-20 16:07:02.012474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.773 [2024-11-20 16:07:02.013875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.773 [2024-11-20 16:07:02.013881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.031 [2024-11-20 16:07:02.070884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:04.597 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.597 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:04.597 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.597 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.597 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:04.856 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.856 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81284 00:19:04.856 16:07:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:05.113 [2024-11-20 16:07:03.151885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.113 16:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:05.370 Malloc0 00:19:05.370 16:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:05.628 16:07:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:05.886 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:06.173 [2024-11-20 16:07:04.305414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:06.173 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:06.431 [2024-11-20 16:07:04.549531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:06.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81340 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81340 /var/tmp/bdevperf.sock 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 81340 ']' 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.431 16:07:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:07.804 16:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.804 16:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:19:07.804 16:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:07.804 16:07:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:08.062 Nvme0n1 00:19:08.062 16:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:08.320 Nvme0n1 00:19:08.577 16:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.577 16:07:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:19:09.514 16:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:19:09.514 16:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:09.772 16:07:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:10.031 16:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:19:10.031 16:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:10.031 16:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81385 00:19:10.031 16:07:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.617 Attaching 4 probes... 00:19:16.617 @path[10.0.0.3, 4421]: 17520 00:19:16.617 @path[10.0.0.3, 4421]: 17981 00:19:16.617 @path[10.0.0.3, 4421]: 17871 00:19:16.617 @path[10.0.0.3, 4421]: 17901 00:19:16.617 @path[10.0.0.3, 4421]: 17668 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81385 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:16.617 16:07:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:16.875 16:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:19:16.875 16:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81504 00:19:16.875 16:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:16.875 16:07:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.435 Attaching 4 probes... 00:19:23.435 @path[10.0.0.3, 4420]: 17683 00:19:23.435 @path[10.0.0.3, 4420]: 17997 00:19:23.435 @path[10.0.0.3, 4420]: 18056 00:19:23.435 @path[10.0.0.3, 4420]: 18164 00:19:23.435 @path[10.0.0.3, 4420]: 18167 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81504 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:19:23.435 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:23.694 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:23.994 16:07:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:19:23.994 16:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81621 00:19:23.994 16:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:23.994 16:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.600 Attaching 4 probes... 00:19:30.600 @path[10.0.0.3, 4421]: 13398 00:19:30.600 @path[10.0.0.3, 4421]: 17968 00:19:30.600 @path[10.0.0.3, 4421]: 17772 00:19:30.600 @path[10.0.0.3, 4421]: 17762 00:19:30.600 @path[10.0.0.3, 4421]: 17834 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81621 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:30.600 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:30.859 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:19:30.859 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81729 00:19:30.859 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:30.859 16:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:37.569 16:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:19:37.569 16:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.569 Attaching 4 probes... 
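The RPC sequence traced above is what turns that fabric into the multipath test bed: a 64 MiB malloc bdev is exported through an ANA-reporting subsystem with listeners on ports 4420 and 4421, and bdevperf attaches both portals to a single Nvme0 controller with -x multipath. Collected in one place (condensed from the trace, omitting the sleeps and the bdevperf launch itself), the sequence is roughly:

# Condensed from the trace above; the target-side calls go to the default
# /var/tmp/spdk.sock, the bdevperf calls to its own /var/tmp/bdevperf.sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r: report ANA states
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# bdevperf side: both attach calls use -x multipath, so the second portal is
# added as another path of the existing Nvme0 controller rather than a new bdev.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# each test step then flips per-listener ANA states, for example:
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized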
00:19:37.569 00:19:37.569 00:19:37.569 00:19:37.569 00:19:37.569 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81729 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:37.569 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:37.827 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:19:37.827 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81847 00:19:37.827 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:37.827 16:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:44.434 16:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:44.434 16:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.434 Attaching 4 probes... 
00:19:44.434 @path[10.0.0.3, 4421]: 16971 00:19:44.434 @path[10.0.0.3, 4421]: 17286 00:19:44.434 @path[10.0.0.3, 4421]: 17548 00:19:44.434 @path[10.0.0.3, 4421]: 17322 00:19:44.434 @path[10.0.0.3, 4421]: 17524 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81847 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:44.434 16:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:45.811 16:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:45.811 16:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81971 00:19:45.811 16:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:45.811 16:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:52.385 Attaching 4 probes... 
00:19:52.385 @path[10.0.0.3, 4420]: 16742 00:19:52.385 @path[10.0.0.3, 4420]: 17145 00:19:52.385 @path[10.0.0.3, 4420]: 16868 00:19:52.385 @path[10.0.0.3, 4420]: 17140 00:19:52.385 @path[10.0.0.3, 4420]: 16880 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81971 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:52.385 16:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:52.385 [2024-11-20 16:07:50.240130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:52.385 16:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:52.385 16:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:58.949 16:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:58.949 16:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82145 00:19:58.949 16:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81284 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:58.949 16:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:05.520 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:05.520 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:05.520 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:05.520 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.520 Attaching 4 probes... 
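Each confirm_io_on_port block above follows the same pattern: look up the listener whose ANA state matches the expectation via nvmf_subsystem_get_listeners and jq, let the bpftrace script (nvmf_path.bt) count I/O per path into trace.txt while bdevperf keeps running, then take the port of the first @path counter with awk, cut and sed and compare it to the expected port. A reassembled sketch of that check (an approximation of the helpers in test/nvmf/host/multipath.sh, with filters and paths taken from the log, not the script verbatim) is:

# Sketch of confirm_io_on_port, reassembled from the trace above.
expected_state=$1          # e.g. "optimized"
expected_port=$2           # e.g. 4421
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

# which listener currently advertises the expected ANA state?
active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
    | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

# trace.txt holds one bpftrace counter per path, e.g. "@path[10.0.0.3, 4421]: 17520";
# keep the port field of the first counter line.
port=$(awk '$1=="@path[10.0.0.3," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

# the step passes when the I/O landed on the listener matching the ANA state
[[ $port == "$active_port" ]] && [[ $port == "$expected_port" ]]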
00:20:05.520 @path[10.0.0.3, 4421]: 17104 00:20:05.521 @path[10.0.0.3, 4421]: 17236 00:20:05.521 @path[10.0.0.3, 4421]: 17261 00:20:05.521 @path[10.0.0.3, 4421]: 17169 00:20:05.521 @path[10.0.0.3, 4421]: 17048 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82145 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81340 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81340 ']' 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81340 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81340 00:20:05.521 killing process with pid 81340 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81340' 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81340 00:20:05.521 16:08:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81340 00:20:05.521 { 00:20:05.521 "results": [ 00:20:05.521 { 00:20:05.521 "job": "Nvme0n1", 00:20:05.521 "core_mask": "0x4", 00:20:05.521 "workload": "verify", 00:20:05.521 "status": "terminated", 00:20:05.521 "verify_range": { 00:20:05.521 "start": 0, 00:20:05.521 "length": 16384 00:20:05.521 }, 00:20:05.521 "queue_depth": 128, 00:20:05.521 "io_size": 4096, 00:20:05.521 "runtime": 56.181184, 00:20:05.521 "iops": 7455.271857567117, 00:20:05.521 "mibps": 29.12215569362155, 00:20:05.521 "io_failed": 0, 00:20:05.521 "io_timeout": 0, 00:20:05.521 "avg_latency_us": 17136.474392836073, 00:20:05.521 "min_latency_us": 292.30545454545455, 00:20:05.521 "max_latency_us": 7046430.72 00:20:05.521 } 00:20:05.521 ], 00:20:05.521 "core_count": 1 00:20:05.521 } 00:20:05.521 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81340 00:20:05.521 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:05.521 [2024-11-20 16:07:04.607748] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 
24.03.0 initialization... 00:20:05.521 [2024-11-20 16:07:04.607850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81340 ] 00:20:05.521 [2024-11-20 16:07:04.786654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.521 [2024-11-20 16:07:04.861116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.521 [2024-11-20 16:07:04.915274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.521 Running I/O for 90 seconds... 00:20:05.521 6677.00 IOPS, 26.08 MiB/s [2024-11-20T16:08:03.771Z] 7772.50 IOPS, 30.36 MiB/s [2024-11-20T16:08:03.771Z] 8189.67 IOPS, 31.99 MiB/s [2024-11-20T16:08:03.771Z] 8388.25 IOPS, 32.77 MiB/s [2024-11-20T16:08:03.771Z] 8499.40 IOPS, 33.20 MiB/s [2024-11-20T16:08:03.771Z] 8576.17 IOPS, 33.50 MiB/s [2024-11-20T16:08:03.771Z] 8611.57 IOPS, 33.64 MiB/s [2024-11-20T16:08:03.771Z] 8644.12 IOPS, 33.77 MiB/s [2024-11-20T16:08:03.771Z] [2024-11-20 16:07:15.090183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.521 [2024-11-20 16:07:15.090582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.521 [2024-11-20 16:07:15.090934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:05.521 [2024-11-20 16:07:15.090956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.521 [2024-11-20 16:07:15.090971 .. 16:07:15.097011] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: per-cid notices for in-flight READ commands (sqid:1 nsid:1 lba:56080-56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1 nsid:1 lba:56584-57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
00:20:05.524 8674.00 IOPS, 33.88 MiB/s [2024-11-20T16:08:03.774Z] 8702.60 IOPS, 33.99 MiB/s [2024-11-20T16:08:03.774Z] 8728.91 IOPS, 34.10 MiB/s [2024-11-20T16:08:03.774Z] 8754.17 IOPS, 34.20 MiB/s [2024-11-20T16:08:03.774Z] 8779.85 IOPS, 34.30 MiB/s [2024-11-20T16:08:03.774Z] 8801.86 IOPS, 34.38 MiB/s [2024-11-20T16:08:03.774Z] 8818.00 IOPS, 34.45 MiB/s [2024-11-20T16:08:03.774Z] 
00:20:05.524 [2024-11-20 16:07:21.727986 .. 16:07:21.731754] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: per-cid notices for in-flight WRITE commands (sqid:1 nsid:1 lba:10896-11272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1 nsid:1 lba:10384-10744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
00:20:05.527 [2024-11-20 16:07:21.731775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.731824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.731865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.731902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:10776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.731938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.731975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.731989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.732313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.732328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.733181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.733233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.527 [2024-11-20 16:07:21.733279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.527 [2024-11-20 16:07:21.733323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.527 [2024-11-20 16:07:21.733399] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:05.527 [2024-11-20 16:07:21.733438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.527 [2024-11-20 16:07:21.733453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:05.528 [2024-11-20 16:07:21.733889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.733961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.733986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.734016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.734032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:21.734061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:21.734076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:05.528 8273.12 IOPS, 32.32 MiB/s [2024-11-20T16:08:03.778Z] 8305.12 IOPS, 32.44 MiB/s [2024-11-20T16:08:03.778Z] 8340.61 IOPS, 32.58 MiB/s [2024-11-20T16:08:03.778Z] 8369.63 IOPS, 32.69 MiB/s [2024-11-20T16:08:03.778Z] 8396.55 IOPS, 32.80 MiB/s [2024-11-20T16:08:03.778Z] 8421.38 IOPS, 32.90 MiB/s [2024-11-20T16:08:03.778Z] 8441.59 IOPS, 32.97 MiB/s [2024-11-20T16:08:03.778Z] [2024-11-20 16:07:28.944747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.944845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.944908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.944931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.944955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.944971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.944993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:102 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945462] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.528 [2024-11-20 16:07:28.945515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.528 [2024-11-20 16:07:28.945972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:05.528 [2024-11-20 16:07:28.945994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.946009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:20:05.529 [2024-11-20 16:07:28.946031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.529 [2024-11-20 16:07:28.946959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.946986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:05.529 [2024-11-20 16:07:28.947193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:05.529 [2024-11-20 16:07:28.947486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.529 [2024-11-20 16:07:28.947502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.947539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 
lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.947577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.947905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.947943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.947965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.947980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
00:20:05.530 [2024-11-20 16:07:28.948358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.530 [2024-11-20 16:07:28.948523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:19 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.530 [2024-11-20 16:07:28.948848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:05.530 [2024-11-20 16:07:28.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.948886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.948908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.948924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.948946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.948962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.948984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.948999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.949416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.949432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.531 [2024-11-20 16:07:28.950209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:05.531 [2024-11-20 16:07:28.950262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60776 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:28.950777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:28.950793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:05.531 8160.83 IOPS, 31.88 MiB/s [2024-11-20T16:08:03.781Z] 7820.79 IOPS, 30.55 MiB/s [2024-11-20T16:08:03.781Z] 7507.96 IOPS, 29.33 MiB/s [2024-11-20T16:08:03.781Z] 7219.19 IOPS, 28.20 MiB/s [2024-11-20T16:08:03.781Z] 6951.81 IOPS, 27.16 MiB/s [2024-11-20T16:08:03.781Z] 6703.54 IOPS, 26.19 MiB/s [2024-11-20T16:08:03.781Z] 6472.38 IOPS, 25.28 MiB/s [2024-11-20T16:08:03.781Z] 6477.13 IOPS, 25.30 MiB/s [2024-11-20T16:08:03.781Z] 6544.06 IOPS, 25.56 MiB/s [2024-11-20T16:08:03.781Z] 6612.31 IOPS, 25.83 MiB/s [2024-11-20T16:08:03.781Z] 6676.18 IOPS, 26.08 MiB/s [2024-11-20T16:08:03.781Z] 6736.29 IOPS, 26.31 MiB/s [2024-11-20T16:08:03.781Z] 6793.66 IOPS, 26.54 MiB/s [2024-11-20T16:08:03.781Z] [2024-11-20 16:07:42.603134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:05.531 [2024-11-20 16:07:42.603578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.531 [2024-11-20 16:07:42.603592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.603872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.603909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.603948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.603970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.603996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.532 [2024-11-20 16:07:42.604765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.604957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.604971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:05.532 [2024-11-20 16:07:42.604986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.605000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.532 [2024-11-20 16:07:42.605015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.532 [2024-11-20 16:07:42.605029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 
16:07:42.605287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.605531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605927] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.605985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.605999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.533 [2024-11-20 16:07:42.606037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.533 [2024-11-20 16:07:42.606250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.533 [2024-11-20 16:07:42.606266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:05.534 [2024-11-20 16:07:42.606530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 
[2024-11-20 16:07:42.606887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.606990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.534 [2024-11-20 16:07:42.607003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x913290 is same with the state(6) to be set 00:20:05.534 [2024-11-20 16:07:42.607035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128776 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129232 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129240 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607188] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129248 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129256 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.534 [2024-11-20 16:07:42.607312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129264 len:8 PRP1 0x0 PRP2 0x0 00:20:05.534 [2024-11-20 16:07:42.607325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.534 [2024-11-20 16:07:42.607339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.534 [2024-11-20 16:07:42.607349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129272 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129280 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129288 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129296 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129304 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129312 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129320 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129328 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129336 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 
16:07:42.607818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129344 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.607858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.535 [2024-11-20 16:07:42.607868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.535 [2024-11-20 16:07:42.607878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129352 len:8 PRP1 0x0 PRP2 0x0 00:20:05.535 [2024-11-20 16:07:42.607891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.609196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:05.535 [2024-11-20 16:07:42.609283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.535 [2024-11-20 16:07:42.609308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.535 [2024-11-20 16:07:42.609353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8841d0 (9): Bad file descriptor 00:20:05.535 [2024-11-20 16:07:42.609792] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.535 [2024-11-20 16:07:42.609840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8841d0 with addr=10.0.0.3, port=4421 00:20:05.535 [2024-11-20 16:07:42.609860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8841d0 is same with the state(6) to be set 00:20:05.535 [2024-11-20 16:07:42.609938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8841d0 (9): Bad file descriptor 00:20:05.535 [2024-11-20 16:07:42.609976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:05.535 [2024-11-20 16:07:42.610006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:05.535 [2024-11-20 16:07:42.610022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:05.535 [2024-11-20 16:07:42.610037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
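The connect() failure recorded just above ("uring_sock_create: *ERROR*: connect() failed, errno = 111") is consistent with the repeated reset/reconnect attempts that follow: on Linux, errno 111 is ECONNREFUSED, i.e. the connection to 10.0.0.3:4421 was actively refused (typically because no listener was up on that path at the time), and bdev_nvme keeps retrying until the later "Resetting controller successful" record appears. A minimal Python sketch confirming the errno mapping (assumes a Linux host; illustrative only, not part of the test scripts):

    import errno
    import os

    # errno 111 as printed by uring_sock_create in the record above
    print(errno.errorcode[111])   # -> 'ECONNREFUSED' on Linux
    print(os.strerror(111))       # -> 'Connection refused'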
00:20:05.535 [2024-11-20 16:07:42.610052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:05.535 6843.64 IOPS, 26.73 MiB/s [2024-11-20T16:08:03.785Z] 6889.59 IOPS, 26.91 MiB/s [2024-11-20T16:08:03.785Z] 6928.08 IOPS, 27.06 MiB/s [2024-11-20T16:08:03.785Z] 6971.36 IOPS, 27.23 MiB/s [2024-11-20T16:08:03.785Z] 7008.27 IOPS, 27.38 MiB/s [2024-11-20T16:08:03.785Z] 7047.29 IOPS, 27.53 MiB/s [2024-11-20T16:08:03.785Z] 7080.26 IOPS, 27.66 MiB/s [2024-11-20T16:08:03.785Z] 7113.37 IOPS, 27.79 MiB/s [2024-11-20T16:08:03.785Z] 7145.70 IOPS, 27.91 MiB/s [2024-11-20T16:08:03.785Z] 7176.24 IOPS, 28.03 MiB/s [2024-11-20T16:08:03.785Z] [2024-11-20 16:07:52.665823] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:20:05.535 7206.30 IOPS, 28.15 MiB/s [2024-11-20T16:08:03.785Z] 7235.45 IOPS, 28.26 MiB/s [2024-11-20T16:08:03.785Z] 7263.21 IOPS, 28.37 MiB/s [2024-11-20T16:08:03.785Z] 7290.16 IOPS, 28.48 MiB/s [2024-11-20T16:08:03.785Z] 7316.52 IOPS, 28.58 MiB/s [2024-11-20T16:08:03.785Z] 7341.76 IOPS, 28.68 MiB/s [2024-11-20T16:08:03.785Z] 7366.65 IOPS, 28.78 MiB/s [2024-11-20T16:08:03.785Z] 7390.75 IOPS, 28.87 MiB/s [2024-11-20T16:08:03.785Z] 7413.24 IOPS, 28.96 MiB/s [2024-11-20T16:08:03.785Z] 7433.49 IOPS, 29.04 MiB/s [2024-11-20T16:08:03.785Z] 7452.75 IOPS, 29.11 MiB/s [2024-11-20T16:08:03.785Z] Received shutdown signal, test time was about 56.182046 seconds 00:20:05.535 00:20:05.535 Latency(us) 00:20:05.535 [2024-11-20T16:08:03.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.535 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.535 Verification LBA range: start 0x0 length 0x4000 00:20:05.535 Nvme0n1 : 56.18 7455.27 29.12 0.00 0.00 17136.47 292.31 7046430.72 00:20:05.535 [2024-11-20T16:08:03.785Z] =================================================================================================================== 00:20:05.535 [2024-11-20T16:08:03.785Z] Total : 7455.27 29.12 0.00 0.00 17136.47 292.31 7046430.72 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.535 rmmod nvme_tcp 00:20:05.535 rmmod nvme_fabrics 00:20:05.535 rmmod nvme_keyring 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.535 16:08:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 81284 ']' 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 81284 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 81284 ']' 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 81284 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.535 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81284 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.536 killing process with pid 81284 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81284' 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 81284 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 81284 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:05.536 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:20:05.793 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:05.793 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:05.793 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:05.793 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:05.793 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 
00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:20:05.794 00:20:05.794 real 1m2.891s 00:20:05.794 user 2m55.605s 00:20:05.794 sys 0m18.006s 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.794 16:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:05.794 ************************************ 00:20:05.794 END TEST nvmf_host_multipath 00:20:05.794 ************************************ 00:20:05.794 16:08:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:05.794 16:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:05.794 16:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.794 16:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.794 ************************************ 00:20:05.794 START TEST nvmf_timeout 00:20:05.794 ************************************ 00:20:05.794 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:20:06.053 * Looking for test storage... 
00:20:06.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:06.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.053 --rc genhtml_branch_coverage=1 00:20:06.053 --rc genhtml_function_coverage=1 00:20:06.053 --rc genhtml_legend=1 00:20:06.053 --rc geninfo_all_blocks=1 00:20:06.053 --rc geninfo_unexecuted_blocks=1 00:20:06.053 00:20:06.053 ' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:06.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.053 --rc genhtml_branch_coverage=1 00:20:06.053 --rc genhtml_function_coverage=1 00:20:06.053 --rc genhtml_legend=1 00:20:06.053 --rc geninfo_all_blocks=1 00:20:06.053 --rc geninfo_unexecuted_blocks=1 00:20:06.053 00:20:06.053 ' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:06.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.053 --rc genhtml_branch_coverage=1 00:20:06.053 --rc genhtml_function_coverage=1 00:20:06.053 --rc genhtml_legend=1 00:20:06.053 --rc geninfo_all_blocks=1 00:20:06.053 --rc geninfo_unexecuted_blocks=1 00:20:06.053 00:20:06.053 ' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:06.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.053 --rc genhtml_branch_coverage=1 00:20:06.053 --rc genhtml_function_coverage=1 00:20:06.053 --rc genhtml_legend=1 00:20:06.053 --rc geninfo_all_blocks=1 00:20:06.053 --rc geninfo_unexecuted_blocks=1 00:20:06.053 00:20:06.053 ' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.053 
16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:06.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:06.053 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:06.054 16:08:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:06.054 Cannot find device "nvmf_init_br" 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:06.054 Cannot find device "nvmf_init_br2" 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:20:06.054 Cannot find device "nvmf_tgt_br" 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.054 Cannot find device "nvmf_tgt_br2" 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:06.054 Cannot find device "nvmf_init_br" 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:20:06.054 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:06.312 Cannot find device "nvmf_init_br2" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:06.312 Cannot find device "nvmf_tgt_br" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:06.312 Cannot find device "nvmf_tgt_br2" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:06.312 Cannot find device "nvmf_br" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:06.312 Cannot find device "nvmf_init_if" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:06.312 Cannot find device "nvmf_init_if2" 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:06.312 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
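The "Cannot find device" and "Cannot open network namespace" messages above are only the idempotent cleanup pass: nvmf_veth_fini runs first and harmlessly fails because nothing from a previous run exists. nvmf_veth_init then rebuilds the virtual test network that the rest of this test relies on. Condensed below from the ip/iptables commands traced above (same names and addresses as in the log; a sketch, not a verbatim copy of nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk

# Two veth pairs for the initiator side, two for the target side
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace where nvmf_tgt will run
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Let NVMe/TCP traffic in on port 4420 and allow forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow simply confirm the topology: the host can reach both target addresses, and the namespace can reach both initiator addresses.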
00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:06.572 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.572 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:20:06.572 00:20:06.572 --- 10.0.0.3 ping statistics --- 00:20:06.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.572 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:06.572 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:06.572 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:20:06.572 00:20:06.572 --- 10.0.0.4 ping statistics --- 00:20:06.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.572 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:06.572 00:20:06.572 --- 10.0.0.1 ping statistics --- 00:20:06.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.572 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:06.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:06.572 00:20:06.572 --- 10.0.0.2 ping statistics --- 00:20:06.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.572 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=82515 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 82515 00:20:06.572 16:08:04 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82515 ']' 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.572 16:08:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:06.572 [2024-11-20 16:08:04.762668] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:20:06.572 [2024-11-20 16:08:04.762795] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.831 [2024-11-20 16:08:04.913344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:06.831 [2024-11-20 16:08:04.977832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.831 [2024-11-20 16:08:04.977934] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.831 [2024-11-20 16:08:04.977947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.831 [2024-11-20 16:08:04.977954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.831 [2024-11-20 16:08:04.977962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
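The target itself was launched a few entries earlier inside that namespace by nvmfappstart (pid 82515): ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, i.e. shared-memory id 0, all tracepoint groups, cores 0-1, which the reactor notices just below confirm. nvmfappstart then blocks in waitforlisten until the RPC socket answers; a rough equivalent of that bring-up, where the polling loop only approximates what waitforlisten in autotest_common.sh actually does:

spdk=/home/vagrant/spdk_repo/spdk

# Start the target inside the test namespace, as in the trace above
ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Wait until the app is up and serving RPCs on the default socket
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done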
00:20:06.831 [2024-11-20 16:08:04.979241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.831 [2024-11-20 16:08:04.979254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.831 [2024-11-20 16:08:05.038367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:07.088 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.088 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:07.088 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.088 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:07.089 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:07.089 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.089 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.089 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:07.346 [2024-11-20 16:08:05.443662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.346 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:07.667 Malloc0 00:20:07.667 16:08:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.940 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:08.506 [2024-11-20 16:08:06.726210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82557 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82557 /var/tmp/bdevperf.sock 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82557 ']' 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
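Everything timeout.sh needs on the target side is provisioned through that RPC socket, exactly as traced above: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the top of the script), subsystem cnode1, its namespace, and a listener on the first target address. Collected in one place for readability (same commands as in the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420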
00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.506 16:08:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:08.764 [2024-11-20 16:08:06.809526] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:20:08.764 [2024-11-20 16:08:06.809633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82557 ] 00:20:08.764 [2024-11-20 16:08:06.963519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.022 [2024-11-20 16:08:07.039401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.022 [2024-11-20 16:08:07.100445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:09.022 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.022 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:09.022 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:09.280 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:20:09.845 NVMe0n1 00:20:09.845 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82573 00:20:09.845 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.845 16:08:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:20:09.845 Running I/O for 10 seconds... 
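On the host side the test drives I/O from a separate bdevperf instance (started above with -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f, pid 82557). The two bdev_nvme RPCs just traced are what arm the behaviour this test exercises: bdev-level retries without limit, reconnect attempts every 2 seconds, and the controller treated as lost after 5 seconds. Gathered from the trace; the comments are annotations, not part of the scripts:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# -r -1: keep retrying failed bdev I/O instead of failing it up the stack
$rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_set_options -r -1

# Attach cnode1 over TCP with the reconnect/loss knobs under test
$rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the queued job (q=128, 4 KiB, verify, 10 s) that bdevperf was launched with
$bdevperf_py -s "$bdevperf_rpc_sock" perform_tests &
rpc_pid=$!

The nvmf_subsystem_remove_listener call that follows (port 4420, shortly after the run starts) is the fault injection: it is what produces the flood of ABORTED - SQ DELETION completions below while bdev_nvme retries the connection.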
00:20:10.779 16:08:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:11.040 6827.00 IOPS, 26.67 MiB/s [2024-11-20T16:08:09.290Z] [2024-11-20 16:08:09.115402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.040 [2024-11-20 16:08:09.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62872 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.040 [2024-11-20 16:08:09.115762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.040 [2024-11-20 16:08:09.115772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 
[2024-11-20 16:08:09.115929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.115970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.115982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.041 [2024-11-20 16:08:09.116627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.041 [2024-11-20 16:08:09.116636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116789] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.116985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.116994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117036] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63440 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:11.042 [2024-11-20 16:08:09.117528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.042 [2024-11-20 16:08:09.117540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.042 [2024-11-20 16:08:09.117549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117756] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.117981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.117992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:62768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.043 [2024-11-20 16:08:09.118316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:11.043 [2024-11-20 16:08:09.118337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f3f60 is same with the state(6) to be set 00:20:11.043 [2024-11-20 16:08:09.118360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:11.043 [2024-11-20 16:08:09.118368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:11.043 [2024-11-20 16:08:09.118377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63696 len:8 PRP1 0x0 PRP2 0x0 00:20:11.043 [2024-11-20 16:08:09.118386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:11.043 [2024-11-20 16:08:09.118698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:11.043 [2024-11-20 16:08:09.118776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986e50 (9): Bad file descriptor 00:20:11.043 [2024-11-20 16:08:09.118906] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:11.043 [2024-11-20 16:08:09.118929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986e50 with addr=10.0.0.3, port=4420 00:20:11.043 [2024-11-20 16:08:09.118948] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986e50 is same with the state(6) to be set 00:20:11.044 [2024-11-20 16:08:09.118966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986e50 (9): Bad file descriptor 00:20:11.044 [2024-11-20 16:08:09.118997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:11.044 [2024-11-20 16:08:09.119014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:11.044 [2024-11-20 16:08:09.119030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:11.044 [2024-11-20 16:08:09.119041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:11.044 [2024-11-20 16:08:09.119052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:11.044 16:08:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:20:12.909 3917.50 IOPS, 15.30 MiB/s [2024-11-20T16:08:11.159Z] 2611.67 IOPS, 10.20 MiB/s [2024-11-20T16:08:11.159Z] [2024-11-20 16:08:11.119328] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:12.909 [2024-11-20 16:08:11.119423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986e50 with addr=10.0.0.3, port=4420 00:20:12.909 [2024-11-20 16:08:11.119445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986e50 is same with the state(6) to be set 00:20:12.909 [2024-11-20 16:08:11.119473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986e50 (9): Bad file descriptor 00:20:12.909 [2024-11-20 16:08:11.119493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:12.909 [2024-11-20 16:08:11.119503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:12.909 [2024-11-20 16:08:11.119516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:12.909 [2024-11-20 16:08:11.119528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:12.909 [2024-11-20 16:08:11.119540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:12.909 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:20:12.909 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:12.909 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:20:13.476 16:08:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:20:15.110 1958.75 IOPS, 7.65 MiB/s [2024-11-20T16:08:13.360Z] 1567.00 IOPS, 6.12 MiB/s [2024-11-20T16:08:13.360Z] [2024-11-20 16:08:13.119765] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:15.110 [2024-11-20 16:08:13.119868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986e50 with addr=10.0.0.3, port=4420 00:20:15.110 [2024-11-20 16:08:13.119886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986e50 is same with the state(6) to be set 00:20:15.110 [2024-11-20 16:08:13.119915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986e50 (9): Bad file descriptor 00:20:15.110 [2024-11-20 16:08:13.119936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:15.110 [2024-11-20 16:08:13.119946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:15.110 [2024-11-20 16:08:13.119959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:15.110 [2024-11-20 16:08:13.119971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:15.110 [2024-11-20 16:08:13.119984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:16.979 1305.83 IOPS, 5.10 MiB/s [2024-11-20T16:08:15.229Z] 1119.29 IOPS, 4.37 MiB/s [2024-11-20T16:08:15.229Z] [2024-11-20 16:08:15.120127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:16.979 [2024-11-20 16:08:15.120180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:16.979 [2024-11-20 16:08:15.120193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:16.979 [2024-11-20 16:08:15.120204] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:20:16.979 [2024-11-20 16:08:15.120217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
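(Side note on the falling throughput readings above: they are consistent with bdevperf reporting a cumulative average while no further I/O completes after the submission queue was deleted and the reconnects keep failing. Treating the first reading, 3917.50 IOPS, as taken at roughly the 2-second mark of the run, i.e. about 7835 completed I/Os in total, the later readings follow from dividing that same total by the elapsed seconds. This is an inference from the numbers themselves, not from bdevperf documentation; a minimal Python check, with the 7835 figure being the only assumed value:

    total_ios = 3917.50 * 2          # ~7835 I/Os inferred from the first reading
    for t in range(3, 9):
        # reproduces 2611.67, 1958.75, 1567.00, 1305.83, 1119.29 and 979.38 IOPS
        print(f"t={t}s: {total_ios / t:.2f} IOPS, {total_ios / t / 256:.2f} MiB/s")

The MiB/s column is simply IOPS * 4096 B / 2^20, i.e. IOPS / 256, e.g. 2611.67 / 256 = 10.20.)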
00:20:17.912 979.38 IOPS, 3.83 MiB/s 00:20:17.912 Latency(us) 00:20:17.912 [2024-11-20T16:08:16.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.912 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.912 Verification LBA range: start 0x0 length 0x4000 00:20:17.912 NVMe0n1 : 8.14 962.54 3.76 15.72 0.00 130658.71 4587.52 7015926.69 00:20:17.912 [2024-11-20T16:08:16.162Z] =================================================================================================================== 00:20:17.912 [2024-11-20T16:08:16.162Z] Total : 962.54 3.76 15.72 0.00 130658.71 4587.52 7015926.69 00:20:17.912 { 00:20:17.912 "results": [ 00:20:17.912 { 00:20:17.912 "job": "NVMe0n1", 00:20:17.912 "core_mask": "0x4", 00:20:17.912 "workload": "verify", 00:20:17.912 "status": "finished", 00:20:17.912 "verify_range": { 00:20:17.912 "start": 0, 00:20:17.912 "length": 16384 00:20:17.912 }, 00:20:17.912 "queue_depth": 128, 00:20:17.912 "io_size": 4096, 00:20:17.912 "runtime": 8.139915, 00:20:17.912 "iops": 962.5407636320526, 00:20:17.912 "mibps": 3.7599248579377056, 00:20:17.912 "io_failed": 128, 00:20:17.912 "io_timeout": 0, 00:20:17.912 "avg_latency_us": 130658.70862922835, 00:20:17.912 "min_latency_us": 4587.52, 00:20:17.912 "max_latency_us": 7015926.69090909 00:20:17.912 } 00:20:17.912 ], 00:20:17.912 "core_count": 1 00:20:17.912 } 00:20:18.477 16:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:20:18.477 16:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:18.477 16:08:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:20:19.042 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:20:19.042 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:20:19.042 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:20:19.042 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 82573 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82557 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82557 ']' 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82557 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82557 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:19.299 killing process with pid 82557 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82557' 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 
-- # kill 82557 00:20:19.299 Received shutdown signal, test time was about 9.422124 seconds 00:20:19.299 00:20:19.299 Latency(us) 00:20:19.299 [2024-11-20T16:08:17.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.299 [2024-11-20T16:08:17.549Z] =================================================================================================================== 00:20:19.299 [2024-11-20T16:08:17.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:19.299 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82557 00:20:19.557 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:19.814 [2024-11-20 16:08:17.874790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82696 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82696 /var/tmp/bdevperf.sock 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82696 ']' 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.814 16:08:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:19.814 [2024-11-20 16:08:17.950498] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:20:19.814 [2024-11-20 16:08:17.950593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82696 ] 00:20:20.072 [2024-11-20 16:08:18.102836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.072 [2024-11-20 16:08:18.159726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.072 [2024-11-20 16:08:18.214419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.021 16:08:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.021 16:08:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:20:21.021 16:08:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:21.021 16:08:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:20:21.589 NVMe0n1 00:20:21.589 16:08:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:21.589 16:08:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82725 00:20:21.589 16:08:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:20:21.589 Running I/O for 10 seconds... 
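(The second bdevperf instance above attaches NVMe0 with --reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2 and --ctrlr-loss-timeout-sec 5, so if the connection to 10.0.0.3:4420 is later lost, the controller should be retried roughly on the schedule sketched below. This is a simplified timeline built only from those three option values, not the bdev_nvme implementation:

    # values taken from the bdev_nvme_attach_controller call above
    reconnect_delay_sec, fast_io_fail_sec, ctrlr_loss_sec = 1, 2, 5
    for t in range(reconnect_delay_sec, ctrlr_loss_sec + 1, reconnect_delay_sec):
        # while disconnected, I/O is held for retry until fast_io_fail_sec elapses,
        # after which new I/O is failed immediately instead of being queued
        io_policy = "queue new I/O" if t < fast_io_fail_sec else "fail new I/O immediately"
        print(f"t={t}s after disconnect: attempt reconnect; {io_policy}")
    print(f"t>{ctrlr_loss_sec}s: stop reconnecting and delete the controller")
)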
00:20:22.525 16:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:22.786 6933.00 IOPS, 27.08 MiB/s [2024-11-20T16:08:21.036Z] [2024-11-20 16:08:20.916713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.786 [2024-11-20 16:08:20.916787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.916984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.916995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65888 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.786 [2024-11-20 16:08:20.917004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.786 [2024-11-20 16:08:20.917015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:65960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:22.787 [2024-11-20 16:08:20.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:65984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.787 [2024-11-20 16:08:20.917637] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.787 [2024-11-20 16:08:20.917648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.917987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.917998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 
[2024-11-20 16:08:20.918058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.788 [2024-11-20 16:08:20.918260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.788 [2024-11-20 16:08:20.918269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66536 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 
16:08:20.918899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.789 [2024-11-20 16:08:20.918964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.789 [2024-11-20 16:08:20.918973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.918984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.918993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.790 [2024-11-20 16:08:20.919421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:22.790 [2024-11-20 16:08:20.919441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc97f60 is same with the state(6) to be set 00:20:22.790 [2024-11-20 16:08:20.919463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:22.790 [2024-11-20 16:08:20.919471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:22.790 [2024-11-20 16:08:20.919479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66712 len:8 PRP1 0x0 PRP2 0x0 00:20:22.790 [2024-11-20 16:08:20.919489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.790 [2024-11-20 16:08:20.919655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.790 [2024-11-20 16:08:20.919677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.790 [2024-11-20 16:08:20.919695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.790 [2024-11-20 16:08:20.919720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.790 [2024-11-20 16:08:20.919734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:22.790 [2024-11-20 16:08:20.919964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:22.790 [2024-11-20 16:08:20.919994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:22.790 [2024-11-20 16:08:20.920097] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.791 [2024-11-20 16:08:20.920127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420 00:20:22.791 [2024-11-20 16:08:20.920139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:22.791 [2024-11-20 16:08:20.920157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:22.791 [2024-11-20 16:08:20.920172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:20:22.791 [2024-11-20 16:08:20.920182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:20:22.791 [2024-11-20 16:08:20.920193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:22.791 [2024-11-20 16:08:20.920203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:20:22.791 [2024-11-20 16:08:20.920213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:22.791 16:08:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:20:23.728 4106.00 IOPS, 16.04 MiB/s [2024-11-20T16:08:21.978Z] [2024-11-20 16:08:21.920338] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:20:23.728 [2024-11-20 16:08:21.920420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420
00:20:23.728 [2024-11-20 16:08:21.920436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set
00:20:23.728 [2024-11-20 16:08:21.920462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor
00:20:23.728 [2024-11-20 16:08:21.920492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:23.728 [2024-11-20 16:08:21.920502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:23.728 [2024-11-20 16:08:21.920513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:23.728 [2024-11-20 16:08:21.920524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:23.728 [2024-11-20 16:08:21.920535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:23.728 16:08:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:20:23.987 [2024-11-20 16:08:22.205529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:20:23.987 16:08:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 82725
00:20:24.815 2737.33 IOPS, 10.69 MiB/s [2024-11-20T16:08:23.065Z] [2024-11-20 16:08:22.933320] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:20:26.688 2053.00 IOPS, 8.02 MiB/s [2024-11-20T16:08:25.872Z] 3137.80 IOPS, 12.26 MiB/s [2024-11-20T16:08:26.807Z] 4174.83 IOPS, 16.31 MiB/s [2024-11-20T16:08:28.185Z] 4906.86 IOPS, 19.17 MiB/s [2024-11-20T16:08:28.751Z] 5447.88 IOPS, 21.28 MiB/s [2024-11-20T16:08:30.124Z] 5880.78 IOPS, 22.97 MiB/s [2024-11-20T16:08:30.124Z] 6232.60 IOPS, 24.35 MiB/s
00:20:31.874 Latency(us)
00:20:31.874 [2024-11-20T16:08:30.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.874 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:31.874 Verification LBA range: start 0x0 length 0x4000
00:20:31.874 NVMe0n1 : 10.01 6238.33 24.37 0.00 0.00 20475.68 1050.07 3019898.88
00:20:31.874 [2024-11-20T16:08:30.124Z] ===================================================================================================================
00:20:31.874 [2024-11-20T16:08:30.124Z] Total : 6238.33 24.37 0.00 0.00 20475.68 1050.07 3019898.88
00:20:31.874 {
00:20:31.874   "results": [
00:20:31.874     {
00:20:31.874       "job": "NVMe0n1",
00:20:31.874       "core_mask": "0x4",
00:20:31.874       "workload": "verify",
00:20:31.874       "status": "finished",
00:20:31.874       "verify_range": {
00:20:31.874         "start": 0,
00:20:31.874         "length": 16384
00:20:31.874       },
00:20:31.874       "queue_depth": 128,
00:20:31.874       "io_size": 4096,
00:20:31.874       "runtime": 10.007964,
00:20:31.874       "iops": 6238.331792560405,
00:20:31.874       "mibps": 24.368483564689082,
00:20:31.874       "io_failed": 0,
00:20:31.874       "io_timeout": 0,
00:20:31.874       "avg_latency_us": 20475.677497127832,
00:20:31.874       "min_latency_us": 1050.0654545454545,
00:20:31.874       "max_latency_us": 3019898.88
00:20:31.874     }
00:20:31.874   ],
00:20:31.874   "core_count": 1
00:20:31.874 }
00:20:31.874 16:08:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82830
00:20:31.874 16:08:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:31.874 16:08:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:20:31.874 Running I/O for 10 seconds...
00:20:32.810 16:08:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:33.073 6933.00 IOPS, 27.08 MiB/s [2024-11-20T16:08:31.323Z] [2024-11-20 16:08:31.074466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 
16:08:31.074706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to 
be set 00:20:33.073 [2024-11-20 16:08:31.074918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.074999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.073 [2024-11-20 16:08:31.075209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075267] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 
00:20:33.074 [2024-11-20 16:08:31.075443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8ef230 is same with the state(6) to be set 00:20:33.074 [2024-11-20 16:08:31.075512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.075985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.075996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.076005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.076016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.076025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.076037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.076045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.076056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.074 [2024-11-20 16:08:31.076065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.074 [2024-11-20 16:08:31.076076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:65024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65032 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:65072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:65080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:65096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:33.075 [2024-11-20 16:08:31.076366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076575] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:65256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:65264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:65296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.075 [2024-11-20 16:08:31.076882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:65312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.075 [2024-11-20 16:08:31.076892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.076903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:65320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.076912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.076923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.076932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.076943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:65336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.076952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.076963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.076973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.076984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.076993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:65376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:65384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:65416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:65432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:65448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:65512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:33.076 [2024-11-20 16:08:31.077443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:65520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:65552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:65560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077645] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:65616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.076 [2024-11-20 16:08:31.077715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.076 [2024-11-20 16:08:31.077730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:65640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.077871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:65704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.077982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:65744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.077999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:32 nsid:1 lba:65776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:65784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:33.077 [2024-11-20 16:08:31.078183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.077 [2024-11-20 16:08:31.078203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc990e0 is same with the state(6) to be set 00:20:33.077 [2024-11-20 16:08:31.078225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:33.077 [2024-11-20 16:08:31.078233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:33.077 [2024-11-20 16:08:31.078241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65696 len:8 PRP1 0x0 PRP2 0x0 00:20:33.077 [2024-11-20 16:08:31.078251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.077 [2024-11-20 16:08:31.078517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:33.077 [2024-11-20 16:08:31.078604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:33.077 [2024-11-20 16:08:31.078734] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.077 [2024-11-20 
16:08:31.078762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420 00:20:33.077 [2024-11-20 16:08:31.078773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:33.077 [2024-11-20 16:08:31.078791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:33.077 [2024-11-20 16:08:31.078827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:33.077 [2024-11-20 16:08:31.078839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:33.077 [2024-11-20 16:08:31.078849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:33.077 [2024-11-20 16:08:31.078860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:33.077 [2024-11-20 16:08:31.078871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:33.077 16:08:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:20:33.944 4050.00 IOPS, 15.82 MiB/s [2024-11-20T16:08:32.194Z] [2024-11-20 16:08:32.079022] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:33.944 [2024-11-20 16:08:32.079104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420 00:20:33.944 [2024-11-20 16:08:32.079121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:33.944 [2024-11-20 16:08:32.079150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:33.944 [2024-11-20 16:08:32.079169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:33.944 [2024-11-20 16:08:32.079179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:33.944 [2024-11-20 16:08:32.079189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:33.944 [2024-11-20 16:08:32.079201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
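The per-second throughput samples interleaved with the reconnect errors above (4050.00 IOPS, 15.82 MiB/s here, and the 2700/2025/1620 samples that follow) are consistent with the 4096-byte I/O size this bdevperf job reports elsewhere in the log. A small standalone check of that conversion, assuming the 4 KiB I/O size:

```python
# Sketch: convert an IOPS sample to MiB/s for a fixed I/O size.
# The 4096-byte size is taken from the bdevperf arguments ("-o 4096") and the
# "io_size": 4096 field reported later in this log; it is an assumption here.
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    return iops * io_size_bytes / (1024 * 1024)

for sample in (4050.00, 2700.00, 2025.00, 1620.00):
    print(f"{sample:8.2f} IOPS -> {iops_to_mibps(sample):5.2f} MiB/s")
# Prints 15.82, 10.55, 7.91 and 6.33 MiB/s, matching the logged values.
```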
00:20:33.944 [2024-11-20 16:08:32.079212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:34.881 2700.00 IOPS, 10.55 MiB/s [2024-11-20T16:08:33.131Z] [2024-11-20 16:08:33.079370] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:34.881 [2024-11-20 16:08:33.079450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420 00:20:34.881 [2024-11-20 16:08:33.079465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:34.881 [2024-11-20 16:08:33.079494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:34.881 [2024-11-20 16:08:33.079513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:34.881 [2024-11-20 16:08:33.079525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:34.881 [2024-11-20 16:08:33.079535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:34.881 [2024-11-20 16:08:33.079547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:20:34.881 [2024-11-20 16:08:33.079559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:36.074 2025.00 IOPS, 7.91 MiB/s [2024-11-20T16:08:34.324Z] [2024-11-20 16:08:34.083263] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:36.074 [2024-11-20 16:08:34.083327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2ae50 with addr=10.0.0.3, port=4420 00:20:36.074 [2024-11-20 16:08:34.083342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2ae50 is same with the state(6) to be set 00:20:36.074 [2024-11-20 16:08:34.083595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2ae50 (9): Bad file descriptor 00:20:36.074 [2024-11-20 16:08:34.083859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:20:36.074 [2024-11-20 16:08:34.083881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:20:36.074 [2024-11-20 16:08:34.083893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:20:36.074 [2024-11-20 16:08:34.083904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
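The pattern above repeats roughly once per second: uring_sock_create fails with errno 111 (connection refused) while the target listener is down, controller reinitialization fails, and bdev_nvme schedules another reset. A minimal illustrative sketch of such a fixed-delay reconnect loop is below; it is not the SPDK bdev_nvme code, and the 2 s delay and 5 s loss timeout defaults are borrowed from the --reconnect-delay-sec/--ctrlr-loss-timeout-sec options used by the later attach in this log, whereas the attempts in this stretch come about one second apart.

```python
# Illustrative sketch only; not the SPDK bdev_nvme reset path.
# Models a fixed-delay reconnect loop: every attempt fails with ECONNREFUSED
# (errno 111) while no listener exists, and the controller is given up on once
# the loss timeout elapses. Delay/timeout values are assumptions (see above).
import socket
import time

def reconnect_loop(addr: str, port: int,
                   reconnect_delay_sec: float = 2.0,
                   ctrlr_loss_timeout_sec: float = 5.0) -> bool:
    deadline = time.monotonic() + ctrlr_loss_timeout_sec
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((addr, port), timeout=1.0):
                return True   # listener is back; the reset can proceed
        except OSError as exc:
            print(f"connect() failed, errno = {exc.errno}")
        time.sleep(reconnect_delay_sec)
    return False              # controller declared lost

# Example: reconnect_loop("10.0.0.3", 4420)
```

In the log below, the retries only succeed after nvmf_subsystem_add_listener restores the 10.0.0.3:4420 listener, at which point the reset completes ("Resetting controller successful").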
00:20:36.074 [2024-11-20 16:08:34.083916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:20:36.074 16:08:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:36.333 [2024-11-20 16:08:34.418086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:36.333 16:08:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 82830 00:20:36.899 1620.00 IOPS, 6.33 MiB/s [2024-11-20T16:08:35.149Z] [2024-11-20 16:08:35.109059] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:20:38.846 2601.00 IOPS, 10.16 MiB/s [2024-11-20T16:08:38.031Z] 3584.43 IOPS, 14.00 MiB/s [2024-11-20T16:08:38.967Z] 4328.38 IOPS, 16.91 MiB/s [2024-11-20T16:08:40.341Z] 4907.00 IOPS, 19.17 MiB/s [2024-11-20T16:08:40.341Z] 5363.50 IOPS, 20.95 MiB/s 00:20:42.091 Latency(us) 00:20:42.091 [2024-11-20T16:08:40.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.091 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:42.091 Verification LBA range: start 0x0 length 0x4000 00:20:42.091 NVMe0n1 : 10.01 5368.55 20.97 3652.57 0.00 14153.21 692.60 3019898.88 00:20:42.091 [2024-11-20T16:08:40.341Z] =================================================================================================================== 00:20:42.091 [2024-11-20T16:08:40.341Z] Total : 5368.55 20.97 3652.57 0.00 14153.21 0.00 3019898.88 00:20:42.091 { 00:20:42.091 "results": [ 00:20:42.091 { 00:20:42.091 "job": "NVMe0n1", 00:20:42.091 "core_mask": "0x4", 00:20:42.091 "workload": "verify", 00:20:42.091 "status": "finished", 00:20:42.091 "verify_range": { 00:20:42.091 "start": 0, 00:20:42.091 "length": 16384 00:20:42.091 }, 00:20:42.091 "queue_depth": 128, 00:20:42.091 "io_size": 4096, 00:20:42.091 "runtime": 10.008853, 00:20:42.091 "iops": 5368.547225141582, 00:20:42.091 "mibps": 20.970887598209305, 00:20:42.091 "io_failed": 36558, 00:20:42.091 "io_timeout": 0, 00:20:42.091 "avg_latency_us": 14153.209853634864, 00:20:42.091 "min_latency_us": 692.5963636363637, 00:20:42.091 "max_latency_us": 3019898.88 00:20:42.091 } 00:20:42.091 ], 00:20:42.091 "core_count": 1 00:20:42.091 } 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82696 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82696 ']' 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82696 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82696 00:20:42.091 killing process with pid 82696 00:20:42.091 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.091 00:20:42.091 Latency(us) 00:20:42.091 [2024-11-20T16:08:40.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.091 [2024-11-20T16:08:40.341Z] =================================================================================================================== 00:20:42.091 [2024-11-20T16:08:40.341Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82696' 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82696 00:20:42.091 16:08:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82696 00:20:42.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82943 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82943 /var/tmp/bdevperf.sock 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 82943 ']' 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.091 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:42.091 [2024-11-20 16:08:40.222554] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:20:42.091 [2024-11-20 16:08:40.222662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82943 ]
00:20:42.349 [2024-11-20 16:08:40.373374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:42.349 [2024-11-20 16:08:40.436003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:42.349 [2024-11-20 16:08:40.492127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:20:42.349 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:42.349 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:20:42.349 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82947
00:20:42.349 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82943 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:20:42.349 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:20:42.608 16:08:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:20:43.173 NVMe0n1
00:20:43.173 16:08:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82989
00:20:43.173 16:08:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:43.173 16:08:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:20:43.173 Running I/O for 10 seconds...
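The trace above shows host/timeout.sh preparing the second run: bdevperf is started with -q 128 -o 4096 -w randread -t 10, a bpftrace probe (nvmf_timeout.bt) is attached, bdev_nvme options are adjusted with bdev_nvme_set_options -r -1 -e 9, the controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, and perform_tests starts the 10-second workload. A minimal sketch of replaying that RPC sequence against an already-running bdevperf instance, assuming the repo path and RPC socket shown in the log:

```python
# Minimal sketch, assuming a bdevperf process is already listening on
# /var/tmp/bdevperf.sock and the SPDK tree is at the path shown in this log.
# It replays the RPC sequence issued by host/timeout.sh above; every argument
# is copied verbatim from the log.
import subprocess

SPDK = "/home/vagrant/spdk_repo/spdk"
SOCK = "/var/tmp/bdevperf.sock"

def rpc(*args: str) -> None:
    # scripts/rpc.py forwards the call to the bdevperf JSON-RPC server.
    subprocess.run([f"{SPDK}/scripts/rpc.py", "-s", SOCK, *args], check=True)

rpc("bdev_nvme_set_options", "-r", "-1", "-e", "9")
rpc("bdev_nvme_attach_controller", "-b", "NVMe0", "-t", "tcp",
    "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1",
    "--ctrlr-loss-timeout-sec", "5", "--reconnect-delay-sec", "2")

# Run the workload configured on the bdevperf command line
# (queue depth 128, 4096-byte random reads, 10 seconds).
subprocess.run([f"{SPDK}/examples/bdev/bdevperf/bdevperf.py",
                "-s", SOCK, "perform_tests"], check=True)
```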
00:20:44.109 16:08:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:44.370 14986.00 IOPS, 58.54 MiB/s [2024-11-20T16:08:42.620Z] [2024-11-20 16:08:42.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 
nsid:1 lba:111880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.479986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.479995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:44.370 [2024-11-20 16:08:42.480184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:67512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.370 [2024-11-20 16:08:42.480350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.370 [2024-11-20 16:08:42.480359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:67288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:30736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:42000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.480985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.480995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.371 [2024-11-20 16:08:42.481184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.371 [2024-11-20 16:08:42.481196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 
[2024-11-20 16:08:42.481259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:125512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:31424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:111552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.481988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.481999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.482009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.482026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.482036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.482047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.482057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.482068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.372 [2024-11-20 16:08:42.482077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.372 [2024-11-20 16:08:42.482088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:125432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:44.373 [2024-11-20 16:08:42.482290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128e20 is same with the state(6) to be set 00:20:44.373 [2024-11-20 16:08:42.482312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:44.373 [2024-11-20 16:08:42.482321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:44.373 [2024-11-20 16:08:42.482331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88536 len:8 PRP1 0x0 PRP2 0x0 00:20:44.373 [2024-11-20 16:08:42.482341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:44.373 [2024-11-20 16:08:42.482662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:44.373 [2024-11-20 16:08:42.482742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bbe50 (9): Bad file descriptor 00:20:44.373 [2024-11-20 16:08:42.482861] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:44.373 [2024-11-20 16:08:42.482884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x10bbe50 with addr=10.0.0.3, port=4420 00:20:44.373 [2024-11-20 16:08:42.482895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bbe50 is same with the state(6) to be set 00:20:44.373 [2024-11-20 16:08:42.482914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bbe50 (9): Bad file descriptor 00:20:44.373 [2024-11-20 16:08:42.482930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:44.373 [2024-11-20 16:08:42.482941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:44.373 [2024-11-20 16:08:42.482952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:44.373 [2024-11-20 16:08:42.482969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:44.373 [2024-11-20 16:08:42.482980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:44.373 16:08:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82989 00:20:46.242 8699.50 IOPS, 33.98 MiB/s [2024-11-20T16:08:44.492Z] 5799.67 IOPS, 22.65 MiB/s [2024-11-20T16:08:44.492Z] [2024-11-20 16:08:44.483217] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.243 [2024-11-20 16:08:44.483294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bbe50 with addr=10.0.0.3, port=4420 00:20:46.243 [2024-11-20 16:08:44.483312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bbe50 is same with the state(6) to be set 00:20:46.243 [2024-11-20 16:08:44.483339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bbe50 (9): Bad file descriptor 00:20:46.243 [2024-11-20 16:08:44.483360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:46.243 [2024-11-20 16:08:44.483370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:46.243 [2024-11-20 16:08:44.483382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:46.243 [2024-11-20 16:08:44.483394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:20:46.243 [2024-11-20 16:08:44.483406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:48.113 4349.75 IOPS, 16.99 MiB/s [2024-11-20T16:08:46.626Z] 3479.80 IOPS, 13.59 MiB/s [2024-11-20T16:08:46.626Z] [2024-11-20 16:08:46.483711] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:48.376 [2024-11-20 16:08:46.483807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10bbe50 with addr=10.0.0.3, port=4420 00:20:48.376 [2024-11-20 16:08:46.483846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bbe50 is same with the state(6) to be set 00:20:48.376 [2024-11-20 16:08:46.483878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bbe50 (9): Bad file descriptor 00:20:48.376 [2024-11-20 16:08:46.483899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:48.376 [2024-11-20 16:08:46.483909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:48.376 [2024-11-20 16:08:46.483920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:48.376 [2024-11-20 16:08:46.483933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:20:48.376 [2024-11-20 16:08:46.483945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:20:50.258 2899.83 IOPS, 11.33 MiB/s [2024-11-20T16:08:48.508Z] 2485.57 IOPS, 9.71 MiB/s [2024-11-20T16:08:48.508Z] [2024-11-20 16:08:48.484053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:20:50.258 [2024-11-20 16:08:48.484394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:20:50.258 [2024-11-20 16:08:48.484419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:20:50.258 [2024-11-20 16:08:48.484431] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:20:50.258 [2024-11-20 16:08:48.484446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
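The reset attempts above all fail the same way: with the 10.0.0.3:4420 listener removed at the top of this run, each connect() returns errno 111 and the controller retries roughly every two seconds (16:08:42, :44, :46, :48). A minimal shell sketch of how that cadence could be double-checked from a saved copy of this console output follows; 'console.log' is a hypothetical file name used only for illustration and is not part of timeout.sh.
# Pull the timestamp of each failed connect() attempt; they should be ~2 s apart.
grep -o '16:08:[0-9.]*] uring.c: 664:uring_sock_create' console.log | cut -d']' -f1
# timeout.sh applies the same idea to trace.txt further below: it greps for
# 'reconnect delay bdev controller NVMe0' and passes only if the count exceeds 2
# (three delayed reconnects are recorded in this run).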
00:20:51.452 2174.88 IOPS, 8.50 MiB/s 00:20:51.452 Latency(us) 00:20:51.452 [2024-11-20T16:08:49.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.452 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:51.452 NVMe0n1 : 8.17 2129.69 8.32 15.67 0.00 59614.37 8340.95 7015926.69 00:20:51.452 [2024-11-20T16:08:49.702Z] =================================================================================================================== 00:20:51.452 [2024-11-20T16:08:49.702Z] Total : 2129.69 8.32 15.67 0.00 59614.37 8340.95 7015926.69 00:20:51.452 { 00:20:51.452 "results": [ 00:20:51.452 { 00:20:51.452 "job": "NVMe0n1", 00:20:51.452 "core_mask": "0x4", 00:20:51.452 "workload": "randread", 00:20:51.452 "status": "finished", 00:20:51.452 "queue_depth": 128, 00:20:51.452 "io_size": 4096, 00:20:51.452 "runtime": 8.169734, 00:20:51.452 "iops": 2129.6899017764836, 00:20:51.452 "mibps": 8.319101178814389, 00:20:51.452 "io_failed": 128, 00:20:51.452 "io_timeout": 0, 00:20:51.452 "avg_latency_us": 59614.369946420324, 00:20:51.452 "min_latency_us": 8340.945454545454, 00:20:51.452 "max_latency_us": 7015926.69090909 00:20:51.452 } 00:20:51.452 ], 00:20:51.452 "core_count": 1 00:20:51.452 } 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.452 Attaching 5 probes... 00:20:51.452 1394.291049: reset bdev controller NVMe0 00:20:51.452 1394.428844: reconnect bdev controller NVMe0 00:20:51.452 3394.724469: reconnect delay bdev controller NVMe0 00:20:51.452 3394.748453: reconnect bdev controller NVMe0 00:20:51.452 5395.192706: reconnect delay bdev controller NVMe0 00:20:51.452 5395.222499: reconnect bdev controller NVMe0 00:20:51.452 7395.657906: reconnect delay bdev controller NVMe0 00:20:51.452 7395.690229: reconnect bdev controller NVMe0 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82947 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82943 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82943 ']' 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82943 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82943 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:51.452 killing process with pid 82943 00:20:51.452 Received shutdown signal, test time was about 8.234757 seconds 00:20:51.452 00:20:51.452 Latency(us) 00:20:51.452 [2024-11-20T16:08:49.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.452 [2024-11-20T16:08:49.702Z] 
=================================================================================================================== 00:20:51.452 [2024-11-20T16:08:49.702Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82943' 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82943 00:20:51.452 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82943 00:20:51.711 16:08:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.969 rmmod nvme_tcp 00:20:51.969 rmmod nvme_fabrics 00:20:51.969 rmmod nvme_keyring 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 82515 ']' 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 82515 00:20:51.969 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 82515 ']' 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 82515 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82515 00:20:51.970 killing process with pid 82515 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82515' 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 82515 00:20:51.970 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 82515 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.228 
16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:52.228 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:20:52.487 ************************************ 00:20:52.487 END TEST nvmf_timeout 00:20:52.487 ************************************ 00:20:52.487 00:20:52.487 real 0m46.635s 00:20:52.487 user 2m16.895s 00:20:52.487 sys 0m5.806s 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:52.487 00:20:52.487 real 5m13.110s 00:20:52.487 user 13m30.194s 00:20:52.487 sys 1m11.245s 00:20:52.487 ************************************ 00:20:52.487 END TEST nvmf_host 
00:20:52.487 ************************************ 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.487 16:08:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.746 16:08:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:20:52.746 16:08:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:20:52.746 00:20:52.746 real 13m6.713s 00:20:52.746 user 31m29.927s 00:20:52.746 sys 3m12.956s 00:20:52.746 ************************************ 00:20:52.746 END TEST nvmf_tcp 00:20:52.746 ************************************ 00:20:52.746 16:08:50 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:52.746 16:08:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:52.746 16:08:50 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:20:52.746 16:08:50 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:52.746 16:08:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:52.746 16:08:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:52.746 16:08:50 -- common/autotest_common.sh@10 -- # set +x 00:20:52.746 ************************************ 00:20:52.746 START TEST nvmf_dif 00:20:52.746 ************************************ 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:52.746 * Looking for test storage... 00:20:52.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:52.746 16:08:50 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:52.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.746 --rc genhtml_branch_coverage=1 00:20:52.746 --rc genhtml_function_coverage=1 00:20:52.746 --rc genhtml_legend=1 00:20:52.746 --rc geninfo_all_blocks=1 00:20:52.746 --rc geninfo_unexecuted_blocks=1 00:20:52.746 00:20:52.746 ' 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:52.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.746 --rc genhtml_branch_coverage=1 00:20:52.746 --rc genhtml_function_coverage=1 00:20:52.746 --rc genhtml_legend=1 00:20:52.746 --rc geninfo_all_blocks=1 00:20:52.746 --rc geninfo_unexecuted_blocks=1 00:20:52.746 00:20:52.746 ' 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:52.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.746 --rc genhtml_branch_coverage=1 00:20:52.746 --rc genhtml_function_coverage=1 00:20:52.746 --rc genhtml_legend=1 00:20:52.746 --rc geninfo_all_blocks=1 00:20:52.746 --rc geninfo_unexecuted_blocks=1 00:20:52.746 00:20:52.746 ' 00:20:52.746 16:08:50 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:52.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:52.746 --rc genhtml_branch_coverage=1 00:20:52.746 --rc genhtml_function_coverage=1 00:20:52.746 --rc genhtml_legend=1 00:20:52.746 --rc geninfo_all_blocks=1 00:20:52.747 --rc geninfo_unexecuted_blocks=1 00:20:52.747 00:20:52.747 ' 00:20:52.747 16:08:50 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.747 16:08:50 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.747 16:08:50 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.005 16:08:50 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.006 16:08:50 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:20:53.006 16:08:50 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.006 16:08:50 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.006 16:08:50 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.006 16:08:50 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.006 16:08:50 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.006 16:08:50 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.006 16:08:50 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:53.006 16:08:50 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.006 16:08:50 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.006 16:08:50 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:53.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:53.006 16:08:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:53.006 16:08:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:53.006 16:08:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:53.006 16:08:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:53.006 16:08:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.006 16:08:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:53.006 16:08:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:53.006 Cannot find device 
"nvmf_init_br" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:53.006 Cannot find device "nvmf_init_br2" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:53.006 Cannot find device "nvmf_tgt_br" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@164 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.006 Cannot find device "nvmf_tgt_br2" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@165 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:53.006 Cannot find device "nvmf_init_br" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@166 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:53.006 Cannot find device "nvmf_init_br2" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@167 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:53.006 Cannot find device "nvmf_tgt_br" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@168 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:53.006 Cannot find device "nvmf_tgt_br2" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@169 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:53.006 Cannot find device "nvmf_br" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@170 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:53.006 Cannot find device "nvmf_init_if" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@171 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:53.006 Cannot find device "nvmf_init_if2" 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@172 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@173 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.006 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@174 -- # true 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:53.006 16:08:51 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:53.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:53.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:20:53.265 00:20:53.265 --- 10.0.0.3 ping statistics --- 00:20:53.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.265 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:53.265 16:08:51 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:53.265 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:53.265 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:53.265 00:20:53.265 --- 10.0.0.4 ping statistics --- 00:20:53.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.266 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:20:53.266 00:20:53.266 --- 10.0.0.1 ping statistics --- 00:20:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.266 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:53.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:20:53.266 00:20:53.266 --- 10.0.0.2 ping statistics --- 00:20:53.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.266 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:53.266 16:08:51 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.525 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:53.525 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:53.525 16:08:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:53.525 16:08:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=83482 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:53.525 16:08:51 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 83482 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 83482 ']' 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.525 16:08:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:53.784 [2024-11-20 16:08:51.813777] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:20:53.784 [2024-11-20 16:08:51.813879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.784 [2024-11-20 16:08:51.965051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.042 [2024-11-20 16:08:52.036609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.042 [2024-11-20 16:08:52.036707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.042 [2024-11-20 16:08:52.036726] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.042 [2024-11-20 16:08:52.036738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.042 [2024-11-20 16:08:52.036748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.042 [2024-11-20 16:08:52.037245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.042 [2024-11-20 16:08:52.095657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:20:54.610 16:08:52 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.610 16:08:52 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.610 16:08:52 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:54.610 16:08:52 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.610 [2024-11-20 16:08:52.839100] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.610 16:08:52 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.610 16:08:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:54.610 ************************************ 00:20:54.610 START TEST fio_dif_1_default 00:20:54.610 ************************************ 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:54.610 16:08:52 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.610 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.869 bdev_null0 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:54.869 [2024-11-20 16:08:52.887212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:54.869 { 00:20:54.869 "params": { 00:20:54.869 "name": "Nvme$subsystem", 00:20:54.869 "trtype": "$TEST_TRANSPORT", 00:20:54.869 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:54.869 "adrfam": "ipv4", 00:20:54.869 "trsvcid": "$NVMF_PORT", 00:20:54.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:54.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:54.869 "hdgst": ${hdgst:-false}, 00:20:54.869 "ddgst": ${ddgst:-false} 00:20:54.869 }, 00:20:54.869 "method": "bdev_nvme_attach_controller" 00:20:54.869 } 00:20:54.869 EOF 00:20:54.869 )") 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:54.869 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
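Target-side setup for this first dif job is done entirely through RPCs against the nvmf_tgt process started inside nvmf_tgt_ns_spdk: a null bdev with 16-byte metadata and protection information type 1, a subsystem, a namespace, and a TCP listener on 10.0.0.3:4420. Assuming rpc_cmd forwards its arguments to scripts/rpc.py, as the standard SPDK test helpers do, the equivalent standalone sequence (transport flags -o and --dif-insert-or-strip exactly as traced in create_transport above) would look roughly like this:

# assumption: rpc_cmd in the trace wraps scripts/rpc.py against /var/tmp/spdk.sock
# transport with DIF insert/strip handling, as created by target/dif.sh@50
scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip

# null bdev with 512-byte blocks, 16-byte metadata and DIF type 1 (size/block args as traced)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# subsystem, namespace and NVMe/TCP listener on the target-namespace address
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420
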
00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:54.870 "params": { 00:20:54.870 "name": "Nvme0", 00:20:54.870 "trtype": "tcp", 00:20:54.870 "traddr": "10.0.0.3", 00:20:54.870 "adrfam": "ipv4", 00:20:54.870 "trsvcid": "4420", 00:20:54.870 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.870 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:54.870 "hdgst": false, 00:20:54.870 "ddgst": false 00:20:54.870 }, 00:20:54.870 "method": "bdev_nvme_attach_controller" 00:20:54.870 }' 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.870 16:08:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:55.129 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:55.129 fio-3.35 00:20:55.129 Starting 1 thread 00:21:07.332 00:21:07.332 filename0: (groupid=0, jobs=1): err= 0: pid=83549: Wed Nov 20 16:09:03 2024 00:21:07.332 read: IOPS=8238, BW=32.2MiB/s (33.7MB/s)(322MiB/10001msec) 00:21:07.332 slat (usec): min=4, max=4030, avg= 8.61, stdev=19.89 00:21:07.332 clat (usec): min=415, max=4505, avg=460.33, stdev=56.20 00:21:07.332 lat (usec): min=423, max=4515, avg=468.94, stdev=59.78 00:21:07.332 clat percentiles (usec): 00:21:07.332 | 1.00th=[ 429], 5.00th=[ 433], 10.00th=[ 437], 20.00th=[ 445], 00:21:07.332 | 30.00th=[ 449], 40.00th=[ 453], 50.00th=[ 457], 60.00th=[ 461], 00:21:07.332 | 70.00th=[ 469], 80.00th=[ 474], 90.00th=[ 482], 95.00th=[ 490], 00:21:07.332 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 578], 99.95th=[ 644], 00:21:07.332 | 99.99th=[ 4228] 00:21:07.332 bw ( KiB/s): min=31136, max=33216, per=100.00%, avg=32981.89, stdev=459.05, samples=19 00:21:07.332 iops : min= 7784, max= 8304, avg=8245.47, stdev=114.76, samples=19 00:21:07.332 lat (usec) : 500=98.62%, 750=1.35%, 1000=0.01% 00:21:07.332 lat (msec) : 4=0.01%, 10=0.02% 00:21:07.332 cpu : usr=85.13%, sys=12.98%, ctx=19, majf=0, minf=9 00:21:07.332 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:07.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.332 issued rwts: total=82396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.332 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:07.332 00:21:07.332 Run status group 0 (all jobs): 
00:21:07.332 READ: bw=32.2MiB/s (33.7MB/s), 32.2MiB/s-32.2MiB/s (33.7MB/s-33.7MB/s), io=322MiB (337MB), run=10001-10001msec 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 ************************************ 00:21:07.332 END TEST fio_dif_1_default 00:21:07.332 ************************************ 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 00:21:07.332 real 0m11.097s 00:21:07.332 user 0m9.209s 00:21:07.332 sys 0m1.581s 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 16:09:03 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:21:07.332 16:09:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:07.332 16:09:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.332 16:09:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 ************************************ 00:21:07.332 START TEST fio_dif_1_multi_subsystems 00:21:07.332 ************************************ 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:21:07.332 16:09:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 bdev_null0 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 [2024-11-20 16:09:04.029986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.332 bdev_null1 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.332 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.333 { 00:21:07.333 "params": { 00:21:07.333 "name": "Nvme$subsystem", 00:21:07.333 "trtype": "$TEST_TRANSPORT", 00:21:07.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.333 "adrfam": "ipv4", 00:21:07.333 "trsvcid": "$NVMF_PORT", 00:21:07.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.333 "hdgst": ${hdgst:-false}, 00:21:07.333 "ddgst": ${ddgst:-false} 00:21:07.333 }, 00:21:07.333 "method": "bdev_nvme_attach_controller" 00:21:07.333 } 00:21:07.333 EOF 00:21:07.333 )") 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.333 16:09:04 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:07.333 { 00:21:07.333 "params": { 00:21:07.333 "name": "Nvme$subsystem", 00:21:07.333 "trtype": "$TEST_TRANSPORT", 00:21:07.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:07.333 "adrfam": "ipv4", 00:21:07.333 "trsvcid": "$NVMF_PORT", 00:21:07.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:07.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:07.333 "hdgst": ${hdgst:-false}, 00:21:07.333 "ddgst": ${ddgst:-false} 00:21:07.333 }, 00:21:07.333 "method": "bdev_nvme_attach_controller" 00:21:07.333 } 00:21:07.333 EOF 00:21:07.333 )") 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:07.333 "params": { 00:21:07.333 "name": "Nvme0", 00:21:07.333 "trtype": "tcp", 00:21:07.333 "traddr": "10.0.0.3", 00:21:07.333 "adrfam": "ipv4", 00:21:07.333 "trsvcid": "4420", 00:21:07.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:07.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:07.333 "hdgst": false, 00:21:07.333 "ddgst": false 00:21:07.333 }, 00:21:07.333 "method": "bdev_nvme_attach_controller" 00:21:07.333 },{ 00:21:07.333 "params": { 00:21:07.333 "name": "Nvme1", 00:21:07.333 "trtype": "tcp", 00:21:07.333 "traddr": "10.0.0.3", 00:21:07.333 "adrfam": "ipv4", 00:21:07.333 "trsvcid": "4420", 00:21:07.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.333 "hdgst": false, 00:21:07.333 "ddgst": false 00:21:07.333 }, 00:21:07.333 "method": "bdev_nvme_attach_controller" 00:21:07.333 }' 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:07.333 
16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:07.333 16:09:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:07.333 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:07.333 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:21:07.333 fio-3.35 00:21:07.333 Starting 2 threads 00:21:17.317 00:21:17.317 filename0: (groupid=0, jobs=1): err= 0: pid=83709: Wed Nov 20 16:09:14 2024 00:21:17.317 read: IOPS=4531, BW=17.7MiB/s (18.6MB/s)(177MiB/10001msec) 00:21:17.317 slat (usec): min=7, max=576, avg=14.12, stdev= 6.16 00:21:17.317 clat (usec): min=500, max=1859, avg=844.60, stdev=46.21 00:21:17.317 lat (usec): min=508, max=1894, avg=858.73, stdev=48.04 00:21:17.317 clat percentiles (usec): 00:21:17.317 | 1.00th=[ 742], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 816], 00:21:17.317 | 30.00th=[ 824], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 848], 00:21:17.317 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 889], 95.00th=[ 914], 00:21:17.317 | 99.00th=[ 979], 99.50th=[ 996], 99.90th=[ 1106], 99.95th=[ 1172], 00:21:17.317 | 99.99th=[ 1565] 00:21:17.317 bw ( KiB/s): min=16640, max=18432, per=49.99%, avg=18132.21, stdev=463.01, samples=19 00:21:17.317 iops : min= 4160, max= 4608, avg=4533.05, stdev=115.75, samples=19 00:21:17.317 lat (usec) : 750=2.10%, 1000=97.40% 00:21:17.317 lat (msec) : 2=0.49% 00:21:17.317 cpu : usr=89.09%, sys=9.27%, ctx=60, majf=0, minf=9 00:21:17.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.317 issued rwts: total=45324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:17.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:17.317 filename1: (groupid=0, jobs=1): err= 0: pid=83710: Wed Nov 20 16:09:14 2024 00:21:17.317 read: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(177MiB/10001msec) 00:21:17.317 slat (nsec): min=7567, max=55795, avg=14354.94, stdev=4473.05 00:21:17.317 clat (usec): min=443, max=1764, avg=841.47, stdev=35.43 00:21:17.317 lat (usec): min=454, max=1798, avg=855.82, stdev=37.22 00:21:17.317 clat percentiles (usec): 00:21:17.317 | 1.00th=[ 783], 5.00th=[ 799], 10.00th=[ 807], 20.00th=[ 816], 00:21:17.317 | 30.00th=[ 824], 40.00th=[ 832], 50.00th=[ 840], 60.00th=[ 848], 00:21:17.317 | 70.00th=[ 857], 80.00th=[ 865], 90.00th=[ 881], 95.00th=[ 898], 00:21:17.317 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 996], 99.95th=[ 1012], 00:21:17.317 | 99.99th=[ 1074] 00:21:17.317 bw ( KiB/s): min=16640, max=18432, per=50.03%, avg=18149.05, stdev=445.10, samples=19 00:21:17.317 iops : min= 4160, max= 4608, avg=4537.26, stdev=111.28, samples=19 00:21:17.317 lat (usec) : 500=0.07%, 750=0.13%, 1000=99.73% 00:21:17.317 lat (msec) : 2=0.07% 00:21:17.317 cpu : usr=89.41%, sys=9.31%, ctx=5, majf=0, minf=0 00:21:17.317 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:17.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:17.317 issued rwts: total=45368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:21:17.317 latency : target=0, window=0, percentile=100.00%, depth=4 00:21:17.317 00:21:17.317 Run status group 0 (all jobs): 00:21:17.317 READ: bw=35.4MiB/s (37.1MB/s), 17.7MiB/s-17.7MiB/s (18.6MB/s-18.6MB/s), io=354MiB (371MB), run=10001-10001msec 00:21:17.317 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 ************************************ 00:21:17.318 END TEST fio_dif_1_multi_subsystems 00:21:17.318 ************************************ 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 00:21:17.318 real 0m11.192s 00:21:17.318 user 0m18.668s 00:21:17.318 sys 0m2.155s 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:21:17.318 16:09:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:17.318 16:09:15 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 ************************************ 00:21:17.318 START TEST fio_dif_rand_params 00:21:17.318 ************************************ 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 bdev_null0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:17.318 [2024-11-20 16:09:15.275356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- 
# fio /dev/fd/62 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:17.318 { 00:21:17.318 "params": { 00:21:17.318 "name": "Nvme$subsystem", 00:21:17.318 "trtype": "$TEST_TRANSPORT", 00:21:17.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:17.318 "adrfam": "ipv4", 00:21:17.318 "trsvcid": "$NVMF_PORT", 00:21:17.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:17.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:17.318 "hdgst": ${hdgst:-false}, 00:21:17.318 "ddgst": ${ddgst:-false} 00:21:17.318 }, 00:21:17.318 "method": "bdev_nvme_attach_controller" 00:21:17.318 } 00:21:17.318 EOF 00:21:17.318 )") 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:17.318 "params": { 00:21:17.318 "name": "Nvme0", 00:21:17.318 "trtype": "tcp", 00:21:17.318 "traddr": "10.0.0.3", 00:21:17.318 "adrfam": "ipv4", 00:21:17.318 "trsvcid": "4420", 00:21:17.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.318 "hdgst": false, 00:21:17.318 "ddgst": false 00:21:17.318 }, 00:21:17.318 "method": "bdev_nvme_attach_controller" 00:21:17.318 }' 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:17.318 16:09:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:17.318 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:17.318 ... 
00:21:17.318 fio-3.35 00:21:17.318 Starting 3 threads 00:21:23.940 00:21:23.940 filename0: (groupid=0, jobs=1): err= 0: pid=83866: Wed Nov 20 16:09:21 2024 00:21:23.940 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(155MiB/5006msec) 00:21:23.940 slat (nsec): min=6358, max=42716, avg=17625.90, stdev=5593.30 00:21:23.940 clat (usec): min=9292, max=13182, avg=12047.86, stdev=153.86 00:21:23.940 lat (usec): min=9325, max=13200, avg=12065.49, stdev=154.14 00:21:23.940 clat percentiles (usec): 00:21:23.940 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:21:23.940 | 30.00th=[11994], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:21:23.940 | 70.00th=[12125], 80.00th=[12125], 90.00th=[12125], 95.00th=[12125], 00:21:23.940 | 99.00th=[12256], 99.50th=[12256], 99.90th=[13173], 99.95th=[13173], 00:21:23.940 | 99.99th=[13173] 00:21:23.940 bw ( KiB/s): min=31488, max=32256, per=33.30%, avg=31718.40, stdev=370.98, samples=10 00:21:23.940 iops : min= 246, max= 252, avg=247.80, stdev= 2.90, samples=10 00:21:23.940 lat (msec) : 10=0.24%, 20=99.76% 00:21:23.940 cpu : usr=91.01%, sys=8.19%, ctx=3, majf=0, minf=0 00:21:23.940 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.940 filename0: (groupid=0, jobs=1): err= 0: pid=83867: Wed Nov 20 16:09:21 2024 00:21:23.940 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(155MiB/5007msec) 00:21:23.940 slat (nsec): min=7841, max=63880, avg=13544.18, stdev=7759.77 00:21:23.940 clat (usec): min=11010, max=12273, avg=12055.55, stdev=70.34 00:21:23.940 lat (usec): min=11018, max=12303, avg=12069.09, stdev=71.77 00:21:23.940 clat percentiles (usec): 00:21:23.940 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:21:23.940 | 30.00th=[11994], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:21:23.940 | 70.00th=[12125], 80.00th=[12125], 90.00th=[12125], 95.00th=[12125], 00:21:23.940 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12256], 99.95th=[12256], 00:21:23.940 | 99.99th=[12256] 00:21:23.940 bw ( KiB/s): min=31488, max=32256, per=33.30%, avg=31718.40, stdev=370.98, samples=10 00:21:23.940 iops : min= 246, max= 252, avg=247.80, stdev= 2.90, samples=10 00:21:23.940 lat (msec) : 20=100.00% 00:21:23.940 cpu : usr=91.27%, sys=8.03%, ctx=35, majf=0, minf=0 00:21:23.940 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.940 filename0: (groupid=0, jobs=1): err= 0: pid=83868: Wed Nov 20 16:09:21 2024 00:21:23.940 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(155MiB/5006msec) 00:21:23.940 slat (nsec): min=8305, max=36600, avg=16441.64, stdev=4830.59 00:21:23.940 clat (usec): min=9302, max=12713, avg=12050.61, stdev=147.13 00:21:23.940 lat (usec): min=9318, max=12738, avg=12067.06, stdev=147.40 00:21:23.940 clat percentiles (usec): 00:21:23.940 | 1.00th=[11994], 5.00th=[11994], 10.00th=[11994], 20.00th=[11994], 00:21:23.940 | 30.00th=[11994], 40.00th=[11994], 
50.00th=[11994], 60.00th=[12125], 00:21:23.940 | 70.00th=[12125], 80.00th=[12125], 90.00th=[12125], 95.00th=[12125], 00:21:23.940 | 99.00th=[12256], 99.50th=[12256], 99.90th=[12649], 99.95th=[12649], 00:21:23.940 | 99.99th=[12649] 00:21:23.940 bw ( KiB/s): min=31488, max=32256, per=33.30%, avg=31718.40, stdev=370.98, samples=10 00:21:23.940 iops : min= 246, max= 252, avg=247.80, stdev= 2.90, samples=10 00:21:23.940 lat (msec) : 10=0.24%, 20=99.76% 00:21:23.940 cpu : usr=90.73%, sys=8.37%, ctx=36, majf=0, minf=0 00:21:23.940 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.940 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.940 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:23.940 00:21:23.940 Run status group 0 (all jobs): 00:21:23.940 READ: bw=93.0MiB/s (97.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=466MiB (488MB), run=5006-5007msec 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:21:23.940 
16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.940 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.940 bdev_null0 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 [2024-11-20 16:09:21.376861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 bdev_null1 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 bdev_null2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.941 { 00:21:23.941 "params": { 00:21:23.941 "name": "Nvme$subsystem", 00:21:23.941 "trtype": "$TEST_TRANSPORT", 00:21:23.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.941 "adrfam": "ipv4", 00:21:23.941 "trsvcid": "$NVMF_PORT", 00:21:23.941 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.941 "hdgst": ${hdgst:-false}, 00:21:23.941 "ddgst": ${ddgst:-false} 00:21:23.941 }, 00:21:23.941 "method": "bdev_nvme_attach_controller" 00:21:23.941 } 00:21:23.941 EOF 00:21:23.941 )") 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.941 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.941 { 00:21:23.941 "params": { 00:21:23.941 "name": "Nvme$subsystem", 00:21:23.941 "trtype": "$TEST_TRANSPORT", 00:21:23.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.942 "adrfam": "ipv4", 00:21:23.942 "trsvcid": "$NVMF_PORT", 00:21:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.942 "hdgst": ${hdgst:-false}, 00:21:23.942 "ddgst": ${ddgst:-false} 00:21:23.942 }, 00:21:23.942 "method": "bdev_nvme_attach_controller" 00:21:23.942 } 00:21:23.942 EOF 00:21:23.942 )") 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 
00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:23.942 { 00:21:23.942 "params": { 00:21:23.942 "name": "Nvme$subsystem", 00:21:23.942 "trtype": "$TEST_TRANSPORT", 00:21:23.942 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:23.942 "adrfam": "ipv4", 00:21:23.942 "trsvcid": "$NVMF_PORT", 00:21:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:23.942 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:23.942 "hdgst": ${hdgst:-false}, 00:21:23.942 "ddgst": ${ddgst:-false} 00:21:23.942 }, 00:21:23.942 "method": "bdev_nvme_attach_controller" 00:21:23.942 } 00:21:23.942 EOF 00:21:23.942 )") 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:23.942 "params": { 00:21:23.942 "name": "Nvme0", 00:21:23.942 "trtype": "tcp", 00:21:23.942 "traddr": "10.0.0.3", 00:21:23.942 "adrfam": "ipv4", 00:21:23.942 "trsvcid": "4420", 00:21:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:23.942 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:23.942 "hdgst": false, 00:21:23.942 "ddgst": false 00:21:23.942 }, 00:21:23.942 "method": "bdev_nvme_attach_controller" 00:21:23.942 },{ 00:21:23.942 "params": { 00:21:23.942 "name": "Nvme1", 00:21:23.942 "trtype": "tcp", 00:21:23.942 "traddr": "10.0.0.3", 00:21:23.942 "adrfam": "ipv4", 00:21:23.942 "trsvcid": "4420", 00:21:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.942 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:23.942 "hdgst": false, 00:21:23.942 "ddgst": false 00:21:23.942 }, 00:21:23.942 "method": "bdev_nvme_attach_controller" 00:21:23.942 },{ 00:21:23.942 "params": { 00:21:23.942 "name": "Nvme2", 00:21:23.942 "trtype": "tcp", 00:21:23.942 "traddr": "10.0.0.3", 00:21:23.942 "adrfam": "ipv4", 00:21:23.942 "trsvcid": "4420", 00:21:23.942 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:23.942 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:23.942 "hdgst": false, 00:21:23.942 "ddgst": false 00:21:23.942 }, 00:21:23.942 "method": "bdev_nvme_attach_controller" 00:21:23.942 }' 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:23.942 16:09:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:23.942 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:23.942 ... 00:21:23.942 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:23.942 ... 00:21:23.942 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:21:23.942 ... 00:21:23.942 fio-3.35 00:21:23.942 Starting 24 threads 00:21:36.140 00:21:36.140 filename0: (groupid=0, jobs=1): err= 0: pid=83967: Wed Nov 20 16:09:32 2024 00:21:36.140 read: IOPS=227, BW=910KiB/s (932kB/s)(9108KiB/10012msec) 00:21:36.140 slat (usec): min=5, max=8037, avg=29.29, stdev=334.89 00:21:36.140 clat (msec): min=15, max=131, avg=70.23, stdev=21.61 00:21:36.140 lat (msec): min=15, max=131, avg=70.26, stdev=21.61 00:21:36.140 clat percentiles (msec): 00:21:36.140 | 1.00th=[ 24], 5.00th=[ 38], 10.00th=[ 47], 20.00th=[ 50], 00:21:36.140 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:21:36.140 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 110], 00:21:36.140 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 132], 99.95th=[ 132], 00:21:36.140 | 99.99th=[ 132] 00:21:36.140 bw ( KiB/s): min= 664, max= 1192, per=4.49%, avg=917.05, stdev=139.34, samples=19 00:21:36.140 iops : min= 166, max= 298, avg=229.26, stdev=34.83, samples=19 00:21:36.140 lat (msec) : 20=0.40%, 50=20.73%, 100=68.82%, 250=10.06% 00:21:36.140 cpu : usr=35.98%, sys=1.96%, ctx=1076, majf=0, minf=9 00:21:36.140 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:36.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 issued rwts: total=2277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.140 filename0: (groupid=0, jobs=1): err= 0: pid=83968: Wed Nov 20 16:09:32 2024 00:21:36.140 read: IOPS=204, BW=819KiB/s (838kB/s)(8192KiB/10005msec) 00:21:36.140 slat (usec): min=8, max=8037, avg=29.56, stdev=354.06 00:21:36.140 clat (msec): min=5, max=154, avg=77.99, stdev=23.94 00:21:36.140 lat (msec): min=5, max=154, avg=78.02, stdev=23.94 00:21:36.140 clat percentiles (msec): 00:21:36.140 | 1.00th=[ 15], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 60], 00:21:36.140 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:21:36.140 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:21:36.140 | 99.00th=[ 132], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:21:36.140 | 99.99th=[ 155] 00:21:36.140 bw ( KiB/s): min= 512, max= 1026, per=3.98%, avg=812.32, stdev=148.02, samples=19 00:21:36.140 iops : min= 128, max= 256, avg=203.05, stdev=36.96, samples=19 00:21:36.140 lat (msec) : 10=0.98%, 20=0.29%, 50=12.50%, 100=69.24%, 250=16.99% 00:21:36.140 cpu : usr=35.38%, sys=2.32%, ctx=981, majf=0, minf=9 00:21:36.140 IO depths : 1=0.1%, 2=2.9%, 4=11.6%, 8=71.2%, 16=14.3%, 32=0.0%, >=64=0.0% 00:21:36.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 complete : 0=0.0%, 4=90.2%, 8=7.2%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 issued rwts: 
total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.140 filename0: (groupid=0, jobs=1): err= 0: pid=83969: Wed Nov 20 16:09:32 2024 00:21:36.140 read: IOPS=216, BW=865KiB/s (885kB/s)(8648KiB/10001msec) 00:21:36.140 slat (usec): min=7, max=8026, avg=18.21, stdev=172.36 00:21:36.140 clat (usec): min=1873, max=132013, avg=73901.05, stdev=22763.32 00:21:36.140 lat (usec): min=1881, max=132028, avg=73919.27, stdev=22766.95 00:21:36.140 clat percentiles (msec): 00:21:36.140 | 1.00th=[ 6], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 57], 00:21:36.140 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:21:36.140 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 110], 00:21:36.140 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:21:36.140 | 99.99th=[ 132] 00:21:36.140 bw ( KiB/s): min= 528, max= 1024, per=4.19%, avg=856.42, stdev=146.10, samples=19 00:21:36.140 iops : min= 132, max= 256, avg=214.11, stdev=36.52, samples=19 00:21:36.140 lat (msec) : 2=0.32%, 4=0.28%, 10=0.74%, 20=0.28%, 50=16.28% 00:21:36.140 lat (msec) : 100=68.92%, 250=13.18% 00:21:36.140 cpu : usr=31.96%, sys=1.88%, ctx=944, majf=0, minf=9 00:21:36.140 IO depths : 1=0.1%, 2=2.2%, 4=8.8%, 8=74.4%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:36.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 complete : 0=0.0%, 4=89.3%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 issued rwts: total=2162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.140 filename0: (groupid=0, jobs=1): err= 0: pid=83970: Wed Nov 20 16:09:32 2024 00:21:36.140 read: IOPS=206, BW=824KiB/s (844kB/s)(8292KiB/10060msec) 00:21:36.140 slat (usec): min=4, max=8028, avg=31.96, stdev=362.57 00:21:36.140 clat (msec): min=12, max=167, avg=77.34, stdev=25.07 00:21:36.140 lat (msec): min=12, max=167, avg=77.37, stdev=25.07 00:21:36.140 clat percentiles (msec): 00:21:36.140 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 61], 00:21:36.140 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:21:36.140 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 121], 00:21:36.140 | 99.00th=[ 148], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:21:36.140 | 99.99th=[ 167] 00:21:36.140 bw ( KiB/s): min= 400, max= 1523, per=4.04%, avg=825.25, stdev=217.94, samples=20 00:21:36.140 iops : min= 100, max= 380, avg=206.25, stdev=54.35, samples=20 00:21:36.140 lat (msec) : 20=0.10%, 50=14.04%, 100=70.04%, 250=15.82% 00:21:36.140 cpu : usr=35.18%, sys=2.03%, ctx=982, majf=0, minf=9 00:21:36.140 IO depths : 1=0.1%, 2=2.9%, 4=11.7%, 8=70.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:36.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 complete : 0=0.0%, 4=90.6%, 8=6.9%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.140 issued rwts: total=2073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.140 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.140 filename0: (groupid=0, jobs=1): err= 0: pid=83971: Wed Nov 20 16:09:32 2024 00:21:36.140 read: IOPS=231, BW=925KiB/s (947kB/s)(9324KiB/10082msec) 00:21:36.140 slat (usec): min=3, max=4021, avg=17.21, stdev=116.63 00:21:36.140 clat (usec): min=1462, max=163102, avg=68857.73, stdev=33448.97 00:21:36.140 lat (usec): min=1470, max=163111, avg=68874.94, stdev=33446.48 00:21:36.140 clat percentiles (usec): 00:21:36.140 | 1.00th=[ 1582], 5.00th=[ 1663], 10.00th=[ 5211], 
20.00th=[ 47449], 00:21:36.140 | 30.00th=[ 58459], 40.00th=[ 69731], 50.00th=[ 72877], 60.00th=[ 78119], 00:21:36.140 | 70.00th=[ 83362], 80.00th=[ 93848], 90.00th=[109577], 95.00th=[119014], 00:21:36.140 | 99.00th=[135267], 99.50th=[156238], 99.90th=[156238], 99.95th=[156238], 00:21:36.140 | 99.99th=[162530] 00:21:36.140 bw ( KiB/s): min= 512, max= 3544, per=4.55%, avg=928.70, stdev=630.35, samples=20 00:21:36.140 iops : min= 128, max= 886, avg=232.15, stdev=157.59, samples=20 00:21:36.140 lat (msec) : 2=8.15%, 4=1.50%, 10=1.67%, 20=2.45%, 50=8.71% 00:21:36.140 lat (msec) : 100=61.35%, 250=16.17% 00:21:36.140 cpu : usr=41.16%, sys=2.52%, ctx=1527, majf=0, minf=1 00:21:36.140 IO depths : 1=0.3%, 2=3.0%, 4=11.4%, 8=70.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=90.6%, 8=6.9%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename0: (groupid=0, jobs=1): err= 0: pid=83972: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=213, BW=855KiB/s (876kB/s)(8556KiB/10006msec) 00:21:36.141 slat (usec): min=8, max=8030, avg=27.67, stdev=312.22 00:21:36.141 clat (msec): min=5, max=133, avg=74.71, stdev=22.24 00:21:36.141 lat (msec): min=5, max=133, avg=74.74, stdev=22.25 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 58], 00:21:36.141 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:21:36.141 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 108], 95.00th=[ 112], 00:21:36.141 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 134], 99.95th=[ 134], 00:21:36.141 | 99.99th=[ 134] 00:21:36.141 bw ( KiB/s): min= 528, max= 1152, per=4.17%, avg=852.21, stdev=151.82, samples=19 00:21:36.141 iops : min= 132, max= 288, avg=213.05, stdev=37.96, samples=19 00:21:36.141 lat (msec) : 10=0.75%, 20=0.33%, 50=15.66%, 100=69.85%, 250=13.42% 00:21:36.141 cpu : usr=33.41%, sys=1.83%, ctx=923, majf=0, minf=0 00:21:36.141 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=89.6%, 8=8.4%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename0: (groupid=0, jobs=1): err= 0: pid=83973: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=222, BW=889KiB/s (910kB/s)(8908KiB/10020msec) 00:21:36.141 slat (usec): min=8, max=11031, avg=24.51, stdev=264.06 00:21:36.141 clat (msec): min=26, max=133, avg=71.85, stdev=21.33 00:21:36.141 lat (msec): min=26, max=133, avg=71.87, stdev=21.33 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 53], 00:21:36.141 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:21:36.141 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 106], 95.00th=[ 112], 00:21:36.141 | 99.00th=[ 125], 99.50th=[ 129], 99.90th=[ 134], 99.95th=[ 134], 00:21:36.141 | 99.99th=[ 134] 00:21:36.141 bw ( KiB/s): min= 616, max= 1138, per=4.34%, avg=887.20, stdev=143.86, samples=20 00:21:36.141 iops : min= 154, max= 284, avg=221.75, stdev=35.91, samples=20 00:21:36.141 lat (msec) : 50=16.61%, 100=71.62%, 250=11.76% 00:21:36.141 cpu : usr=39.92%, sys=2.38%, ctx=1353, majf=0, minf=9 
00:21:36.141 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=87.5%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename0: (groupid=0, jobs=1): err= 0: pid=83974: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=229, BW=916KiB/s (938kB/s)(9168KiB/10008msec) 00:21:36.141 slat (usec): min=5, max=8045, avg=28.07, stdev=334.93 00:21:36.141 clat (msec): min=16, max=146, avg=69.72, stdev=21.71 00:21:36.141 lat (msec): min=16, max=146, avg=69.75, stdev=21.72 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 49], 00:21:36.141 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 72], 00:21:36.141 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 109], 00:21:36.141 | 99.00th=[ 128], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 136], 00:21:36.141 | 99.99th=[ 146] 00:21:36.141 bw ( KiB/s): min= 664, max= 1240, per=4.52%, avg=922.95, stdev=145.38, samples=19 00:21:36.141 iops : min= 166, max= 310, avg=230.74, stdev=36.35, samples=19 00:21:36.141 lat (msec) : 20=0.26%, 50=21.99%, 100=68.15%, 250=9.60% 00:21:36.141 cpu : usr=35.21%, sys=2.48%, ctx=1030, majf=0, minf=9 00:21:36.141 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename1: (groupid=0, jobs=1): err= 0: pid=83975: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=196, BW=787KiB/s (805kB/s)(7896KiB/10038msec) 00:21:36.141 slat (usec): min=8, max=4027, avg=28.58, stdev=232.01 00:21:36.141 clat (msec): min=16, max=171, avg=81.07, stdev=21.09 00:21:36.141 lat (msec): min=16, max=171, avg=81.10, stdev=21.09 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 67], 00:21:36.141 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:21:36.141 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 109], 95.00th=[ 118], 00:21:36.141 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 171], 99.95th=[ 171], 00:21:36.141 | 99.99th=[ 171] 00:21:36.141 bw ( KiB/s): min= 512, max= 1040, per=3.84%, avg=785.60, stdev=139.41, samples=20 00:21:36.141 iops : min= 128, max= 260, avg=196.35, stdev=34.83, samples=20 00:21:36.141 lat (msec) : 20=0.10%, 50=6.74%, 100=72.95%, 250=20.21% 00:21:36.141 cpu : usr=41.78%, sys=2.86%, ctx=1407, majf=0, minf=9 00:21:36.141 IO depths : 1=0.1%, 2=4.3%, 4=17.1%, 8=64.9%, 16=13.5%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=92.0%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=1974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename1: (groupid=0, jobs=1): err= 0: pid=83976: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=205, BW=823KiB/s (843kB/s)(8312KiB/10102msec) 00:21:36.141 slat (usec): min=8, max=8026, avg=18.46, stdev=175.86 00:21:36.141 clat (msec): min=14, max=167, 
avg=77.60, stdev=25.21 00:21:36.141 lat (msec): min=14, max=167, avg=77.62, stdev=25.21 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 18], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 61], 00:21:36.141 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:21:36.141 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:21:36.141 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:21:36.141 | 99.99th=[ 169] 00:21:36.141 bw ( KiB/s): min= 528, max= 1648, per=4.04%, avg=824.70, stdev=229.43, samples=20 00:21:36.141 iops : min= 132, max= 412, avg=206.15, stdev=57.35, samples=20 00:21:36.141 lat (msec) : 20=1.64%, 50=12.13%, 100=70.93%, 250=15.30% 00:21:36.141 cpu : usr=32.77%, sys=1.94%, ctx=903, majf=0, minf=9 00:21:36.141 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=74.3%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=90.0%, 8=8.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename1: (groupid=0, jobs=1): err= 0: pid=83977: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=201, BW=808KiB/s (827kB/s)(8132KiB/10068msec) 00:21:36.141 slat (usec): min=6, max=8022, avg=20.56, stdev=198.67 00:21:36.141 clat (msec): min=11, max=168, avg=79.03, stdev=24.26 00:21:36.141 lat (msec): min=11, max=168, avg=79.05, stdev=24.27 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 62], 00:21:36.141 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 84], 00:21:36.141 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 121], 00:21:36.141 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 140], 99.95th=[ 169], 00:21:36.141 | 99.99th=[ 169] 00:21:36.141 bw ( KiB/s): min= 512, max= 1552, per=3.95%, avg=807.90, stdev=215.49, samples=20 00:21:36.141 iops : min= 128, max= 388, avg=201.95, stdev=53.86, samples=20 00:21:36.141 lat (msec) : 20=3.25%, 50=9.20%, 100=70.04%, 250=17.51% 00:21:36.141 cpu : usr=34.93%, sys=2.28%, ctx=993, majf=0, minf=9 00:21:36.141 IO depths : 1=0.1%, 2=3.2%, 4=13.1%, 8=68.9%, 16=14.7%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=91.3%, 8=5.8%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename1: (groupid=0, jobs=1): err= 0: pid=83978: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=227, BW=910KiB/s (932kB/s)(9176KiB/10085msec) 00:21:36.141 slat (usec): min=3, max=8022, avg=18.14, stdev=167.31 00:21:36.141 clat (msec): min=6, max=155, avg=70.11, stdev=24.03 00:21:36.141 lat (msec): min=6, max=155, avg=70.13, stdev=24.04 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 44], 20.00th=[ 51], 00:21:36.141 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 75], 00:21:36.141 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 113], 00:21:36.141 | 99.00th=[ 129], 99.50th=[ 130], 99.90th=[ 131], 99.95th=[ 144], 00:21:36.141 | 99.99th=[ 157] 00:21:36.141 bw ( KiB/s): min= 584, max= 1800, per=4.46%, avg=911.05, stdev=254.09, samples=20 00:21:36.141 iops : min= 146, max= 450, avg=227.75, stdev=63.51, samples=20 00:21:36.141 lat (msec) : 10=0.61%, 20=1.61%, 
50=17.44%, 100=69.05%, 250=11.29% 00:21:36.141 cpu : usr=40.87%, sys=2.85%, ctx=1338, majf=0, minf=9 00:21:36.141 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.3%, 16=16.0%, 32=0.0%, >=64=0.0% 00:21:36.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 complete : 0=0.0%, 4=87.5%, 8=12.1%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.141 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.141 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.141 filename1: (groupid=0, jobs=1): err= 0: pid=83979: Wed Nov 20 16:09:32 2024 00:21:36.141 read: IOPS=214, BW=860KiB/s (880kB/s)(8620KiB/10029msec) 00:21:36.141 slat (usec): min=3, max=8038, avg=18.51, stdev=173.00 00:21:36.141 clat (msec): min=26, max=151, avg=74.33, stdev=22.23 00:21:36.141 lat (msec): min=26, max=151, avg=74.35, stdev=22.24 00:21:36.141 clat percentiles (msec): 00:21:36.141 | 1.00th=[ 31], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 55], 00:21:36.141 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:21:36.142 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 108], 95.00th=[ 113], 00:21:36.142 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 153], 00:21:36.142 | 99.99th=[ 153] 00:21:36.142 bw ( KiB/s): min= 512, max= 1264, per=4.19%, avg=855.50, stdev=176.90, samples=20 00:21:36.142 iops : min= 128, max= 316, avg=213.85, stdev=44.22, samples=20 00:21:36.142 lat (msec) : 50=15.31%, 100=70.26%, 250=14.43% 00:21:36.142 cpu : usr=40.51%, sys=2.41%, ctx=1269, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=2.3%, 4=9.0%, 8=74.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename1: (groupid=0, jobs=1): err= 0: pid=83980: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=217, BW=869KiB/s (889kB/s)(8708KiB/10025msec) 00:21:36.142 slat (usec): min=4, max=8036, avg=24.62, stdev=257.65 00:21:36.142 clat (msec): min=26, max=144, avg=73.52, stdev=21.73 00:21:36.142 lat (msec): min=26, max=144, avg=73.55, stdev=21.73 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 31], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 56], 00:21:36.142 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:21:36.142 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 110], 00:21:36.142 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 144], 00:21:36.142 | 99.99th=[ 144] 00:21:36.142 bw ( KiB/s): min= 584, max= 1264, per=4.24%, avg=866.70, stdev=159.26, samples=20 00:21:36.142 iops : min= 146, max= 316, avg=216.65, stdev=39.81, samples=20 00:21:36.142 lat (msec) : 50=16.58%, 100=70.56%, 250=12.86% 00:21:36.142 cpu : usr=32.10%, sys=2.11%, ctx=943, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.9%, 16=15.2%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename1: (groupid=0, jobs=1): err= 0: pid=83981: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=221, BW=886KiB/s (907kB/s)(8900KiB/10046msec) 00:21:36.142 slat (usec): 
min=5, max=8022, avg=19.63, stdev=189.89 00:21:36.142 clat (msec): min=23, max=139, avg=72.03, stdev=21.63 00:21:36.142 lat (msec): min=23, max=139, avg=72.05, stdev=21.63 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 51], 00:21:36.142 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:21:36.142 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 107], 95.00th=[ 112], 00:21:36.142 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 140], 99.95th=[ 140], 00:21:36.142 | 99.99th=[ 140] 00:21:36.142 bw ( KiB/s): min= 608, max= 1168, per=4.33%, avg=883.45, stdev=156.84, samples=20 00:21:36.142 iops : min= 152, max= 292, avg=220.85, stdev=39.19, samples=20 00:21:36.142 lat (msec) : 50=20.18%, 100=67.51%, 250=12.31% 00:21:36.142 cpu : usr=34.58%, sys=2.04%, ctx=1040, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=0.5%, 4=1.9%, 8=81.6%, 16=15.9%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename1: (groupid=0, jobs=1): err= 0: pid=83982: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=197, BW=790KiB/s (809kB/s)(7948KiB/10058msec) 00:21:36.142 slat (usec): min=8, max=7303, avg=21.34, stdev=186.66 00:21:36.142 clat (msec): min=17, max=166, avg=80.75, stdev=22.96 00:21:36.142 lat (msec): min=17, max=166, avg=80.77, stdev=22.97 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 29], 5.00th=[ 40], 10.00th=[ 54], 20.00th=[ 67], 00:21:36.142 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 83], 00:21:36.142 | 70.00th=[ 91], 80.00th=[ 102], 90.00th=[ 113], 95.00th=[ 121], 00:21:36.142 | 99.00th=[ 131], 99.50th=[ 133], 99.90th=[ 167], 99.95th=[ 167], 00:21:36.142 | 99.99th=[ 167] 00:21:36.142 bw ( KiB/s): min= 513, max= 1405, per=3.86%, avg=788.30, stdev=194.51, samples=20 00:21:36.142 iops : min= 128, max= 351, avg=196.90, stdev=48.62, samples=20 00:21:36.142 lat (msec) : 20=0.70%, 50=7.65%, 100=71.41%, 250=20.23% 00:21:36.142 cpu : usr=41.96%, sys=2.73%, ctx=1327, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=3.9%, 4=15.8%, 8=66.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=91.9%, 8=4.6%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename2: (groupid=0, jobs=1): err= 0: pid=83983: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=218, BW=873KiB/s (894kB/s)(8744KiB/10018msec) 00:21:36.142 slat (usec): min=8, max=8027, avg=19.61, stdev=191.71 00:21:36.142 clat (msec): min=29, max=150, avg=73.22, stdev=21.45 00:21:36.142 lat (msec): min=29, max=150, avg=73.24, stdev=21.45 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 48], 20.00th=[ 55], 00:21:36.142 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:21:36.142 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 112], 00:21:36.142 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 138], 99.95th=[ 138], 00:21:36.142 | 99.99th=[ 150] 00:21:36.142 bw ( KiB/s): min= 608, max= 1152, per=4.25%, avg=867.90, stdev=149.43, samples=20 00:21:36.142 iops : min= 152, max= 288, 
avg=216.95, stdev=37.35, samples=20 00:21:36.142 lat (msec) : 50=17.57%, 100=70.04%, 250=12.40% 00:21:36.142 cpu : usr=34.45%, sys=1.93%, ctx=1050, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=79.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename2: (groupid=0, jobs=1): err= 0: pid=83984: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=197, BW=790KiB/s (809kB/s)(7944KiB/10051msec) 00:21:36.142 slat (usec): min=6, max=8026, avg=27.72, stdev=312.20 00:21:36.142 clat (msec): min=26, max=161, avg=80.74, stdev=22.53 00:21:36.142 lat (msec): min=26, max=162, avg=80.77, stdev=22.54 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 30], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 68], 00:21:36.142 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:21:36.142 | 70.00th=[ 92], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 121], 00:21:36.142 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:21:36.142 | 99.99th=[ 163] 00:21:36.142 bw ( KiB/s): min= 512, max= 1152, per=3.85%, avg=787.70, stdev=150.56, samples=20 00:21:36.142 iops : min= 128, max= 288, avg=196.90, stdev=37.63, samples=20 00:21:36.142 lat (msec) : 50=9.62%, 100=71.80%, 250=18.58% 00:21:36.142 cpu : usr=37.06%, sys=2.17%, ctx=1217, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=3.6%, 4=14.2%, 8=67.9%, 16=14.2%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=91.3%, 8=5.6%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename2: (groupid=0, jobs=1): err= 0: pid=83985: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=219, BW=879KiB/s (900kB/s)(8816KiB/10035msec) 00:21:36.142 slat (usec): min=5, max=8035, avg=34.22, stdev=390.77 00:21:36.142 clat (msec): min=27, max=132, avg=72.64, stdev=20.72 00:21:36.142 lat (msec): min=27, max=132, avg=72.67, stdev=20.73 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 54], 00:21:36.142 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:21:36.142 | 70.00th=[ 82], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 111], 00:21:36.142 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 133], 99.95th=[ 133], 00:21:36.142 | 99.99th=[ 133] 00:21:36.142 bw ( KiB/s): min= 664, max= 1264, per=4.29%, avg=875.15, stdev=148.32, samples=20 00:21:36.142 iops : min= 166, max= 316, avg=218.75, stdev=37.10, samples=20 00:21:36.142 lat (msec) : 50=16.61%, 100=72.19%, 250=11.21% 00:21:36.142 cpu : usr=35.86%, sys=1.97%, ctx=1031, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=1.1%, 4=4.1%, 8=79.3%, 16=15.4%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.142 filename2: (groupid=0, jobs=1): err= 0: pid=83986: Wed Nov 20 16:09:32 2024 00:21:36.142 read: IOPS=203, 
BW=814KiB/s (834kB/s)(8196KiB/10065msec) 00:21:36.142 slat (usec): min=4, max=8026, avg=19.76, stdev=197.98 00:21:36.142 clat (msec): min=13, max=140, avg=78.29, stdev=24.77 00:21:36.142 lat (msec): min=13, max=140, avg=78.31, stdev=24.77 00:21:36.142 clat percentiles (msec): 00:21:36.142 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 48], 20.00th=[ 62], 00:21:36.142 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:21:36.142 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 122], 00:21:36.142 | 99.00th=[ 136], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:21:36.142 | 99.99th=[ 140] 00:21:36.142 bw ( KiB/s): min= 512, max= 1648, per=3.99%, avg=815.50, stdev=228.15, samples=20 00:21:36.142 iops : min= 128, max= 412, avg=203.85, stdev=57.03, samples=20 00:21:36.142 lat (msec) : 20=2.34%, 50=9.52%, 100=72.23%, 250=15.91% 00:21:36.142 cpu : usr=35.25%, sys=2.12%, ctx=1083, majf=0, minf=9 00:21:36.142 IO depths : 1=0.1%, 2=3.4%, 4=13.6%, 8=68.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:21:36.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 complete : 0=0.0%, 4=91.2%, 8=5.8%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.142 issued rwts: total=2049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.142 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.143 filename2: (groupid=0, jobs=1): err= 0: pid=83987: Wed Nov 20 16:09:32 2024 00:21:36.143 read: IOPS=222, BW=890KiB/s (912kB/s)(8964KiB/10069msec) 00:21:36.143 slat (usec): min=8, max=4029, avg=18.22, stdev=120.02 00:21:36.143 clat (msec): min=7, max=155, avg=71.72, stdev=24.08 00:21:36.143 lat (msec): min=7, max=155, avg=71.74, stdev=24.08 00:21:36.143 clat percentiles (msec): 00:21:36.143 | 1.00th=[ 18], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 52], 00:21:36.143 | 30.00th=[ 58], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:21:36.143 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 108], 95.00th=[ 120], 00:21:36.143 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:21:36.143 | 99.99th=[ 157] 00:21:36.143 bw ( KiB/s): min= 560, max= 1568, per=4.36%, avg=891.30, stdev=221.60, samples=20 00:21:36.143 iops : min= 140, max= 392, avg=222.80, stdev=55.38, samples=20 00:21:36.143 lat (msec) : 10=0.09%, 20=1.52%, 50=16.96%, 100=67.83%, 250=13.61% 00:21:36.143 cpu : usr=38.48%, sys=2.19%, ctx=1158, majf=0, minf=9 00:21:36.143 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:21:36.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.143 filename2: (groupid=0, jobs=1): err= 0: pid=83988: Wed Nov 20 16:09:32 2024 00:21:36.143 read: IOPS=206, BW=825KiB/s (845kB/s)(8316KiB/10075msec) 00:21:36.143 slat (usec): min=8, max=8027, avg=28.11, stdev=291.24 00:21:36.143 clat (msec): min=13, max=156, avg=77.33, stdev=25.35 00:21:36.143 lat (msec): min=13, max=156, avg=77.36, stdev=25.35 00:21:36.143 clat percentiles (msec): 00:21:36.143 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 56], 00:21:36.143 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:21:36.143 | 70.00th=[ 86], 80.00th=[ 99], 90.00th=[ 117], 95.00th=[ 121], 00:21:36.143 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:21:36.143 | 99.99th=[ 157] 00:21:36.143 bw ( KiB/s): min= 528, max= 1408, 
per=4.05%, avg=826.30, stdev=205.01, samples=20 00:21:36.143 iops : min= 132, max= 352, avg=206.55, stdev=51.24, samples=20 00:21:36.143 lat (msec) : 20=0.96%, 50=12.70%, 100=68.11%, 250=18.23% 00:21:36.143 cpu : usr=36.74%, sys=2.29%, ctx=1036, majf=0, minf=9 00:21:36.143 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:21:36.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 complete : 0=0.0%, 4=90.0%, 8=7.9%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.143 filename2: (groupid=0, jobs=1): err= 0: pid=83989: Wed Nov 20 16:09:32 2024 00:21:36.143 read: IOPS=228, BW=913KiB/s (935kB/s)(9144KiB/10016msec) 00:21:36.143 slat (usec): min=5, max=8036, avg=26.05, stdev=264.97 00:21:36.143 clat (msec): min=23, max=134, avg=69.94, stdev=21.48 00:21:36.143 lat (msec): min=23, max=134, avg=69.97, stdev=21.49 00:21:36.143 clat percentiles (msec): 00:21:36.143 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 51], 00:21:36.143 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:21:36.143 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 111], 00:21:36.143 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:21:36.143 | 99.99th=[ 136] 00:21:36.143 bw ( KiB/s): min= 616, max= 1296, per=4.46%, avg=910.80, stdev=162.89, samples=20 00:21:36.143 iops : min= 154, max= 324, avg=227.70, stdev=40.72, samples=20 00:21:36.143 lat (msec) : 50=20.12%, 100=69.38%, 250=10.50% 00:21:36.143 cpu : usr=39.15%, sys=2.29%, ctx=1265, majf=0, minf=9 00:21:36.143 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:21:36.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.143 filename2: (groupid=0, jobs=1): err= 0: pid=83990: Wed Nov 20 16:09:32 2024 00:21:36.143 read: IOPS=204, BW=818KiB/s (838kB/s)(8232KiB/10065msec) 00:21:36.143 slat (usec): min=3, max=8021, avg=20.39, stdev=197.53 00:21:36.143 clat (msec): min=12, max=155, avg=78.07, stdev=23.15 00:21:36.143 lat (msec): min=12, max=156, avg=78.09, stdev=23.14 00:21:36.143 clat percentiles (msec): 00:21:36.143 | 1.00th=[ 15], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 63], 00:21:36.143 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:21:36.143 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:21:36.143 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 157], 00:21:36.143 | 99.99th=[ 157] 00:21:36.143 bw ( KiB/s): min= 529, max= 1529, per=4.00%, avg=816.10, stdev=220.29, samples=20 00:21:36.143 iops : min= 132, max= 382, avg=203.90, stdev=55.06, samples=20 00:21:36.143 lat (msec) : 20=1.55%, 50=10.59%, 100=71.48%, 250=16.38% 00:21:36.143 cpu : usr=36.90%, sys=2.17%, ctx=1221, majf=0, minf=9 00:21:36.143 IO depths : 1=0.1%, 2=3.9%, 4=15.5%, 8=66.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:21:36.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 complete : 0=0.0%, 4=91.7%, 8=4.9%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:36.143 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.143 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:36.143 
00:21:36.143 Run status group 0 (all jobs): 00:21:36.143 READ: bw=19.9MiB/s (20.9MB/s), 787KiB/s-925KiB/s (805kB/s-947kB/s), io=201MiB (211MB), run=10001-10102msec 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 bdev_null0 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.143 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 [2024-11-20 16:09:32.862157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 bdev_null1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.144 { 00:21:36.144 "params": { 00:21:36.144 "name": "Nvme$subsystem", 00:21:36.144 "trtype": "$TEST_TRANSPORT", 00:21:36.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.144 "adrfam": "ipv4", 00:21:36.144 "trsvcid": "$NVMF_PORT", 00:21:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.144 "hdgst": ${hdgst:-false}, 00:21:36.144 "ddgst": ${ddgst:-false} 00:21:36.144 }, 00:21:36.144 "method": "bdev_nvme_attach_controller" 00:21:36.144 } 00:21:36.144 EOF 00:21:36.144 )") 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:21:36.144 16:09:32 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:36.144 { 00:21:36.144 "params": { 00:21:36.144 "name": "Nvme$subsystem", 00:21:36.144 "trtype": "$TEST_TRANSPORT", 00:21:36.144 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.144 "adrfam": "ipv4", 00:21:36.144 "trsvcid": "$NVMF_PORT", 00:21:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.144 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.144 "hdgst": ${hdgst:-false}, 00:21:36.144 "ddgst": ${ddgst:-false} 00:21:36.144 }, 00:21:36.144 "method": "bdev_nvme_attach_controller" 00:21:36.144 } 00:21:36.144 EOF 00:21:36.144 )") 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
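For readers following the trace: the harness is assembling two streams here, the NVMe-oF target JSON built by gen_nvmf_target_json (piped through jq above) and the fio job file built by gen_fio_conf, and it hands both to fio over /dev/fd. The lines below are an illustrative reconstruction, not output from this run; the LD_PRELOAD path, the ioengine and the helper names are taken from the trace, while the use of bash process substitution to supply the two descriptors is an assumption about how fio_bdev wires them up.

# Sketch only: roughly how the traced fio launch could be reproduced by hand.
# gen_nvmf_target_json / gen_fio_conf are the helper functions shown in the trace;
# process substitution stands in for the /dev/fd/62 and /dev/fd/61 descriptors seen above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1) \
    <(gen_fio_conf)
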
00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:36.144 "params": { 00:21:36.144 "name": "Nvme0", 00:21:36.144 "trtype": "tcp", 00:21:36.144 "traddr": "10.0.0.3", 00:21:36.144 "adrfam": "ipv4", 00:21:36.144 "trsvcid": "4420", 00:21:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.144 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:36.144 "hdgst": false, 00:21:36.144 "ddgst": false 00:21:36.144 }, 00:21:36.144 "method": "bdev_nvme_attach_controller" 00:21:36.144 },{ 00:21:36.144 "params": { 00:21:36.144 "name": "Nvme1", 00:21:36.144 "trtype": "tcp", 00:21:36.144 "traddr": "10.0.0.3", 00:21:36.144 "adrfam": "ipv4", 00:21:36.144 "trsvcid": "4420", 00:21:36.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.144 "hdgst": false, 00:21:36.144 "ddgst": false 00:21:36.144 }, 00:21:36.144 "method": "bdev_nvme_attach_controller" 00:21:36.144 }' 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:36.144 16:09:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:36.144 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:36.144 ... 00:21:36.144 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:21:36.144 ... 
00:21:36.144 fio-3.35 00:21:36.144 Starting 4 threads 00:21:41.468 00:21:41.468 filename0: (groupid=0, jobs=1): err= 0: pid=84135: Wed Nov 20 16:09:38 2024 00:21:41.468 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5003msec) 00:21:41.468 slat (nsec): min=4756, max=36183, avg=13848.41, stdev=3864.69 00:21:41.468 clat (usec): min=1032, max=7280, avg=4083.35, stdev=460.16 00:21:41.468 lat (usec): min=1041, max=7294, avg=4097.20, stdev=460.73 00:21:41.468 clat percentiles (usec): 00:21:41.468 | 1.00th=[ 2278], 5.00th=[ 3425], 10.00th=[ 3523], 20.00th=[ 4015], 00:21:41.468 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:21:41.468 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4490], 00:21:41.468 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 5473], 99.95th=[ 5473], 00:21:41.468 | 99.99th=[ 7308] 00:21:41.468 bw ( KiB/s): min=14464, max=16576, per=24.58%, avg=15450.67, stdev=707.81, samples=9 00:21:41.468 iops : min= 1808, max= 2072, avg=1931.33, stdev=88.48, samples=9 00:21:41.468 lat (msec) : 2=0.75%, 4=19.45%, 10=79.79% 00:21:41.468 cpu : usr=91.74%, sys=7.48%, ctx=17, majf=0, minf=0 00:21:41.468 IO depths : 1=0.1%, 2=20.4%, 4=53.7%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 issued rwts: total=9684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:41.468 filename0: (groupid=0, jobs=1): err= 0: pid=84136: Wed Nov 20 16:09:38 2024 00:21:41.468 read: IOPS=2068, BW=16.2MiB/s (16.9MB/s)(80.8MiB/5002msec) 00:21:41.468 slat (nsec): min=3963, max=36197, avg=12061.35, stdev=3850.46 00:21:41.468 clat (usec): min=1438, max=6604, avg=3827.17, stdev=948.13 00:21:41.468 lat (usec): min=1447, max=6616, avg=3839.23, stdev=948.60 00:21:41.468 clat percentiles (usec): 00:21:41.468 | 1.00th=[ 1467], 5.00th=[ 1483], 10.00th=[ 2999], 20.00th=[ 3458], 00:21:41.468 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4047], 60.00th=[ 4080], 00:21:41.468 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 5276], 00:21:41.468 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6521], 99.95th=[ 6521], 00:21:41.468 | 99.99th=[ 6587] 00:21:41.468 bw ( KiB/s): min=13696, max=19888, per=26.46%, avg=16629.56, stdev=2181.77, samples=9 00:21:41.468 iops : min= 1712, max= 2486, avg=2078.67, stdev=272.75, samples=9 00:21:41.468 lat (msec) : 2=9.89%, 4=20.27%, 10=69.84% 00:21:41.468 cpu : usr=91.48%, sys=7.68%, ctx=10, majf=0, minf=0 00:21:41.468 IO depths : 1=0.1%, 2=13.9%, 4=56.7%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 issued rwts: total=10345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:41.468 filename1: (groupid=0, jobs=1): err= 0: pid=84137: Wed Nov 20 16:09:38 2024 00:21:41.468 read: IOPS=1937, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5002msec) 00:21:41.468 slat (nsec): min=7673, max=38974, avg=15259.96, stdev=3561.97 00:21:41.468 clat (usec): min=1027, max=6663, avg=4072.11, stdev=468.39 00:21:41.468 lat (usec): min=1041, max=6678, avg=4087.37, stdev=468.82 00:21:41.468 clat percentiles (usec): 00:21:41.468 | 1.00th=[ 2147], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3982], 00:21:41.468 | 30.00th=[ 4015], 
40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4080], 00:21:41.468 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4490], 00:21:41.468 | 99.00th=[ 5342], 99.50th=[ 5407], 99.90th=[ 5473], 99.95th=[ 5473], 00:21:41.468 | 99.99th=[ 6652] 00:21:41.468 bw ( KiB/s): min=14448, max=16448, per=24.62%, avg=15473.78, stdev=715.07, samples=9 00:21:41.468 iops : min= 1806, max= 2056, avg=1934.22, stdev=89.38, samples=9 00:21:41.468 lat (msec) : 2=0.92%, 4=21.20%, 10=77.88% 00:21:41.468 cpu : usr=91.58%, sys=7.66%, ctx=5, majf=0, minf=0 00:21:41.468 IO depths : 1=0.1%, 2=20.4%, 4=53.7%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 issued rwts: total=9692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:41.468 filename1: (groupid=0, jobs=1): err= 0: pid=84138: Wed Nov 20 16:09:38 2024 00:21:41.468 read: IOPS=1916, BW=15.0MiB/s (15.7MB/s)(74.9MiB/5002msec) 00:21:41.468 slat (usec): min=4, max=163, avg=15.68, stdev= 4.04 00:21:41.468 clat (usec): min=1360, max=7310, avg=4115.65, stdev=431.89 00:21:41.468 lat (usec): min=1374, max=7318, avg=4131.33, stdev=431.71 00:21:41.468 clat percentiles (usec): 00:21:41.468 | 1.00th=[ 2704], 5.00th=[ 3458], 10.00th=[ 3589], 20.00th=[ 3982], 00:21:41.468 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:21:41.468 | 70.00th=[ 4359], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4752], 00:21:41.468 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 6456], 99.95th=[ 6456], 00:21:41.468 | 99.99th=[ 7308] 00:21:41.468 bw ( KiB/s): min=14464, max=16400, per=24.31%, avg=15277.89, stdev=636.77, samples=9 00:21:41.468 iops : min= 1808, max= 2050, avg=1909.67, stdev=79.66, samples=9 00:21:41.468 lat (msec) : 2=0.29%, 4=20.74%, 10=78.97% 00:21:41.468 cpu : usr=90.72%, sys=8.02%, ctx=46, majf=0, minf=0 00:21:41.468 IO depths : 1=0.1%, 2=21.1%, 4=53.3%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:41.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.468 issued rwts: total=9585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.468 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:41.468 00:21:41.468 Run status group 0 (all jobs): 00:21:41.468 READ: bw=61.4MiB/s (64.4MB/s), 15.0MiB/s-16.2MiB/s (15.7MB/s-16.9MB/s), io=307MiB (322MB), run=5002-5003msec 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.468 16:09:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 ************************************ 00:21:41.468 END TEST fio_dif_rand_params 00:21:41.468 ************************************ 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.468 00:21:41.468 real 0m23.773s 00:21:41.468 user 2m3.216s 00:21:41.468 sys 0m9.079s 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.468 16:09:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 16:09:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:41.468 16:09:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:41.468 16:09:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.468 16:09:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:41.468 ************************************ 00:21:41.468 START TEST fio_dif_digest 00:21:41.468 ************************************ 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:41.468 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
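The fio_dif_digest test starting here uses the same subsystem helpers as before, this time with a DIF type 3 null bdev and with header and data digests enabled on the TCP connection. Written out as direct RPC calls, the setup traced on the following lines amounts to the sketch below; the rpc.py path is an assumption (rpc_cmd in the harness wraps the SPDK RPC client), while the command names and arguments are exactly those shown in the trace.

# Sketch of the subsystem setup performed by the following trace, as plain RPC calls.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of the RPC client used by rpc_cmd
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
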
00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 bdev_null0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:41.469 [2024-11-20 16:09:39.098398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:41.469 { 00:21:41.469 "params": { 00:21:41.469 "name": "Nvme$subsystem", 00:21:41.469 "trtype": "$TEST_TRANSPORT", 00:21:41.469 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.469 "adrfam": "ipv4", 00:21:41.469 "trsvcid": "$NVMF_PORT", 00:21:41.469 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.469 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.469 
"hdgst": ${hdgst:-false}, 00:21:41.469 "ddgst": ${ddgst:-false} 00:21:41.469 }, 00:21:41.469 "method": "bdev_nvme_attach_controller" 00:21:41.469 } 00:21:41.469 EOF 00:21:41.469 )") 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:41.469 "params": { 00:21:41.469 "name": "Nvme0", 00:21:41.469 "trtype": "tcp", 00:21:41.469 "traddr": "10.0.0.3", 00:21:41.469 "adrfam": "ipv4", 00:21:41.469 "trsvcid": "4420", 00:21:41.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:41.469 "hdgst": true, 00:21:41.469 "ddgst": true 00:21:41.469 }, 00:21:41.469 "method": "bdev_nvme_attach_controller" 00:21:41.469 }' 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:41.469 16:09:39 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:41.469 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:41.469 ... 00:21:41.469 fio-3.35 00:21:41.469 Starting 3 threads 00:21:53.704 00:21:53.704 filename0: (groupid=0, jobs=1): err= 0: pid=84245: Wed Nov 20 16:09:49 2024 00:21:53.704 read: IOPS=218, BW=27.4MiB/s (28.7MB/s)(274MiB/10006msec) 00:21:53.704 slat (nsec): min=7938, max=59767, avg=11439.32, stdev=5339.97 00:21:53.704 clat (usec): min=5960, max=14831, avg=13676.53, stdev=301.64 00:21:53.704 lat (usec): min=5969, max=14844, avg=13687.97, stdev=301.79 00:21:53.704 clat percentiles (usec): 00:21:53.704 | 1.00th=[13566], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:21:53.704 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13698], 60.00th=[13698], 00:21:53.704 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:53.704 | 99.00th=[14091], 99.50th=[14091], 99.90th=[14877], 99.95th=[14877], 00:21:53.704 | 99.99th=[14877] 00:21:53.704 bw ( KiB/s): min=27648, max=28416, per=33.34%, avg=28011.79, stdev=393.98, samples=19 00:21:53.704 iops : min= 216, max= 222, avg=218.84, stdev= 3.08, samples=19 00:21:53.704 lat (msec) : 10=0.14%, 20=99.86% 00:21:53.704 cpu : usr=91.68%, sys=7.78%, ctx=50, majf=0, minf=0 00:21:53.704 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.704 filename0: (groupid=0, jobs=1): err= 0: pid=84246: Wed Nov 20 16:09:49 2024 00:21:53.704 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(274MiB/10009msec) 00:21:53.704 slat (nsec): min=7320, max=45334, avg=14533.47, stdev=2678.56 00:21:53.704 clat (usec): min=9792, max=14894, avg=13678.59, stdev=191.49 00:21:53.704 lat (usec): min=9807, max=14906, avg=13693.12, stdev=191.57 00:21:53.704 clat percentiles (usec): 00:21:53.704 | 1.00th=[13566], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:21:53.704 | 30.00th=[13698], 40.00th=[13698], 50.00th=[13698], 60.00th=[13698], 00:21:53.704 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:53.704 | 99.00th=[13960], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:21:53.704 | 99.99th=[14877] 00:21:53.704 bw ( KiB/s): min=27592, max=28416, per=33.31%, avg=27990.80, stdev=394.79, samples=20 00:21:53.704 iops : min= 215, max= 222, avg=218.65, stdev= 3.12, samples=20 00:21:53.704 lat (msec) : 10=0.14%, 20=99.86% 00:21:53.704 cpu : usr=91.74%, sys=7.75%, ctx=10, majf=0, minf=0 00:21:53.704 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.704 filename0: (groupid=0, jobs=1): err= 0: pid=84247: Wed Nov 20 16:09:49 2024 00:21:53.704 read: IOPS=218, BW=27.3MiB/s (28.7MB/s)(274MiB/10009msec) 00:21:53.704 slat (nsec): min=5577, max=53590, avg=15492.18, stdev=3335.51 00:21:53.704 clat (usec): min=9789, max=14943, 
avg=13675.46, stdev=192.01 00:21:53.704 lat (usec): min=9803, max=14958, avg=13690.95, stdev=192.17 00:21:53.704 clat percentiles (usec): 00:21:53.704 | 1.00th=[13566], 5.00th=[13566], 10.00th=[13566], 20.00th=[13566], 00:21:53.704 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13698], 60.00th=[13698], 00:21:53.704 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13829], 00:21:53.704 | 99.00th=[13960], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:21:53.704 | 99.99th=[15008] 00:21:53.704 bw ( KiB/s): min=27592, max=28416, per=33.31%, avg=27990.80, stdev=394.79, samples=20 00:21:53.704 iops : min= 215, max= 222, avg=218.65, stdev= 3.12, samples=20 00:21:53.704 lat (msec) : 10=0.14%, 20=99.86% 00:21:53.704 cpu : usr=91.76%, sys=7.65%, ctx=153, majf=0, minf=0 00:21:53.704 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:53.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.704 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:53.704 00:21:53.704 Run status group 0 (all jobs): 00:21:53.704 READ: bw=82.1MiB/s (86.0MB/s), 27.3MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=821MiB (861MB), run=10006-10009msec 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.704 ************************************ 00:21:53.704 END TEST fio_dif_digest 00:21:53.704 ************************************ 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.704 00:21:53.704 real 0m11.075s 00:21:53.704 user 0m28.259s 00:21:53.704 sys 0m2.580s 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.704 16:09:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:53.704 16:09:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:53.704 16:09:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:53.704 16:09:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.704 16:09:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:21:53.704 16:09:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.704 16:09:50 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:21:53.704 16:09:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.704 
16:09:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.705 rmmod nvme_tcp 00:21:53.705 rmmod nvme_fabrics 00:21:53.705 rmmod nvme_keyring 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 83482 ']' 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 83482 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 83482 ']' 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 83482 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83482 00:21:53.705 killing process with pid 83482 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83482' 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 83482 00:21:53.705 16:09:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 83482 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:21:53.705 16:09:50 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:53.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.705 Waiting for block devices as requested 00:21:53.705 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:53.705 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete 
nvmf_init_if2 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.705 16:09:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:53.705 16:09:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.705 16:09:51 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:21:53.705 00:21:53.705 real 1m0.543s 00:21:53.705 user 3m48.136s 00:21:53.705 sys 0m20.553s 00:21:53.705 16:09:51 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.705 16:09:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:53.705 ************************************ 00:21:53.705 END TEST nvmf_dif 00:21:53.705 ************************************ 00:21:53.705 16:09:51 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:53.705 16:09:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.705 16:09:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.705 16:09:51 -- common/autotest_common.sh@10 -- # set +x 00:21:53.705 ************************************ 00:21:53.705 START TEST nvmf_abort_qd_sizes 00:21:53.705 ************************************ 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:53.705 * Looking for test storage... 00:21:53.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.705 --rc genhtml_branch_coverage=1 00:21:53.705 --rc genhtml_function_coverage=1 00:21:53.705 --rc genhtml_legend=1 00:21:53.705 --rc geninfo_all_blocks=1 00:21:53.705 --rc geninfo_unexecuted_blocks=1 00:21:53.705 00:21:53.705 ' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.705 --rc genhtml_branch_coverage=1 00:21:53.705 --rc genhtml_function_coverage=1 00:21:53.705 --rc genhtml_legend=1 00:21:53.705 --rc geninfo_all_blocks=1 00:21:53.705 --rc geninfo_unexecuted_blocks=1 00:21:53.705 00:21:53.705 ' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.705 --rc genhtml_branch_coverage=1 00:21:53.705 --rc genhtml_function_coverage=1 00:21:53.705 --rc genhtml_legend=1 00:21:53.705 --rc geninfo_all_blocks=1 00:21:53.705 --rc geninfo_unexecuted_blocks=1 00:21:53.705 00:21:53.705 ' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:53.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.705 --rc genhtml_branch_coverage=1 00:21:53.705 --rc genhtml_function_coverage=1 00:21:53.705 --rc genhtml_legend=1 00:21:53.705 --rc geninfo_all_blocks=1 00:21:53.705 --rc geninfo_unexecuted_blocks=1 00:21:53.705 00:21:53.705 ' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.705 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:53.706 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:53.706 Cannot find device "nvmf_init_br" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:53.706 Cannot find device "nvmf_init_br2" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:53.706 Cannot find device "nvmf_tgt_br" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.706 Cannot find device "nvmf_tgt_br2" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:53.706 Cannot find device "nvmf_init_br" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:53.706 Cannot find device "nvmf_init_br2" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:53.706 Cannot find device "nvmf_tgt_br" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:53.706 Cannot find device "nvmf_tgt_br2" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:53.706 Cannot find device "nvmf_br" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:53.706 Cannot find device "nvmf_init_if" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:53.706 Cannot find device "nvmf_init_if2" 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:53.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
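The nvmf_veth_init steps traced below build a small veth/bridge topology: the target side runs inside its own network namespace and the host-side peer interfaces are enslaved to a bridge. A minimal sketch of that same setup, using the interface names and 10.0.0.x addresses from this trace (the second initiator pair and all error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                 # bridge ties the host-side peers together
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                              # initiator -> target reachability check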
00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:53.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:53.706 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:53.965 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:53.965 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:53.965 16:09:51 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:53.965 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:53.965 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:53.965 00:21:53.965 --- 10.0.0.3 ping statistics --- 00:21:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.965 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:53.965 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:53.965 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:21:53.965 00:21:53.965 --- 10.0.0.4 ping statistics --- 00:21:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.965 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:53.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:53.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:53.965 00:21:53.965 --- 10.0.0.1 ping statistics --- 00:21:53.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.965 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:53.965 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:53.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:53.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:21:53.965 00:21:53.965 --- 10.0.0.2 ping statistics --- 00:21:53.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.966 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:53.966 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.966 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:21:53.966 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:21:53.966 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:54.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:54.532 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:54.791 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84892 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84892 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 84892 ']' 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:54.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.791 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.792 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:54.792 16:09:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:54.792 [2024-11-20 16:09:52.969888] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
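The ACCEPT rules opened above for port 4420 are inserted with an "-m comment" tag of the form SPDK_NVMF:<original rule>, which is what lets the teardown at the end of the run strip exactly these rules and nothing else. A condensed sketch of that add/remove pattern, using one rule taken from this trace:

  # add: tag the rule so it can be identified later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # remove: re-load the ruleset minus everything carrying the SPDK_NVMF tag
  iptables-save | grep -v SPDK_NVMF | iptables-restore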
00:21:54.792 [2024-11-20 16:09:52.969982] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.051 [2024-11-20 16:09:53.122724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.051 [2024-11-20 16:09:53.197249] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.051 [2024-11-20 16:09:53.197313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.051 [2024-11-20 16:09:53.197328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.051 [2024-11-20 16:09:53.197338] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.051 [2024-11-20 16:09:53.197347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.051 [2024-11-20 16:09:53.198634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.051 [2024-11-20 16:09:53.198700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.051 [2024-11-20 16:09:53.198764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.051 [2024-11-20 16:09:53.198767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.051 [2024-11-20 16:09:53.256076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:55.986 16:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.986 16:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:21:55.986 16:09:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.986 16:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.986 16:09:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:21:55.986 16:09:54 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
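The controller enumeration above (nvme_in_userspace) reduces to matching PCI class 01/08, prog-if 02 (NVM Express) in lspci output and skipping any controller still claimed by the kernel nvme driver, so only devices available to userspace drivers are returned. A stripped-down sketch of the same idea, reusing the lspci pipeline from this trace (the FreeBSD branch and the surrounding helper plumbing are omitted):

  # print the BDF of every NVMe controller (PCI class 0108, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"' |
  while read -r bdf; do
      # keep only controllers not bound to the kernel nvme driver (i.e. usable from userspace)
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || echo "$bdf"
  done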
00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 ************************************ 00:21:55.986 START TEST spdk_target_abort 00:21:55.986 ************************************ 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 spdk_targetn1 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 [2024-11-20 16:09:54.128565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:55.986 [2024-11-20 16:09:54.173390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:55.986 16:09:54 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:55.986 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:55.987 16:09:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:59.269 Initializing NVMe Controllers 00:21:59.269 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:21:59.269 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:59.269 Initialization complete. Launching workers. 
00:21:59.269 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10860, failed: 0 00:21:59.269 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1020, failed to submit 9840 00:21:59.269 success 829, unsuccessful 191, failed 0 00:21:59.269 16:09:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:59.269 16:09:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:03.457 Initializing NVMe Controllers 00:22:03.457 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:03.457 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:03.457 Initialization complete. Launching workers. 00:22:03.457 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8928, failed: 0 00:22:03.457 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1174, failed to submit 7754 00:22:03.457 success 417, unsuccessful 757, failed 0 00:22:03.457 16:10:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:03.457 16:10:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:05.988 Initializing NVMe Controllers 00:22:05.988 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:22:05.988 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:05.988 Initialization complete. Launching workers. 
00:22:05.988 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31513, failed: 0 00:22:05.988 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2190, failed to submit 29323 00:22:05.988 success 435, unsuccessful 1755, failed 0 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.988 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84892 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 84892 ']' 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 84892 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84892 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.556 killing process with pid 84892 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84892' 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 84892 00:22:06.556 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 84892 00:22:06.817 00:22:06.817 real 0m10.860s 00:22:06.817 user 0m43.516s 00:22:06.817 sys 0m2.129s 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 ************************************ 00:22:06.817 END TEST spdk_target_abort 00:22:06.817 ************************************ 00:22:06.817 16:10:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:22:06.817 16:10:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:06.817 16:10:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.817 16:10:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:06.817 ************************************ 00:22:06.817 START TEST kernel_target_abort 00:22:06.817 
************************************ 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:06.817 16:10:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:07.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:07.334 Waiting for block devices as requested 00:22:07.335 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:07.335 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:07.335 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:07.594 No valid GPT data, bailing 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:07.594 No valid GPT data, bailing 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:07.594 No valid GPT data, bailing 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:07.594 No valid GPT data, bailing 00:22:07.594 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 --hostid=ca768c1a-78f6-4242-8009-85e76e7a8123 -a 10.0.0.1 -t tcp -s 4420 00:22:07.852 00:22:07.852 Discovery Log Number of Records 2, Generation counter 2 00:22:07.852 =====Discovery Log Entry 0====== 00:22:07.852 trtype: tcp 00:22:07.852 adrfam: ipv4 00:22:07.852 subtype: current discovery subsystem 00:22:07.852 treq: not specified, sq flow control disable supported 00:22:07.852 portid: 1 00:22:07.852 trsvcid: 4420 00:22:07.852 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:07.852 traddr: 10.0.0.1 00:22:07.852 eflags: none 00:22:07.852 sectype: none 00:22:07.852 =====Discovery Log Entry 1====== 00:22:07.852 trtype: tcp 00:22:07.852 adrfam: ipv4 00:22:07.852 subtype: nvme subsystem 00:22:07.852 treq: not specified, sq flow control disable supported 00:22:07.852 portid: 1 00:22:07.852 trsvcid: 4420 00:22:07.852 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:07.852 traddr: 10.0.0.1 00:22:07.852 eflags: none 00:22:07.852 sectype: none 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:22:07.852 16:10:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:07.852 16:10:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:11.135 Initializing NVMe Controllers 00:22:11.135 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:11.135 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:11.135 Initialization complete. Launching workers. 00:22:11.135 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34636, failed: 0 00:22:11.135 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34636, failed to submit 0 00:22:11.135 success 0, unsuccessful 34636, failed 0 00:22:11.135 16:10:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:11.135 16:10:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:14.475 Initializing NVMe Controllers 00:22:14.475 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:14.475 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:14.475 Initialization complete. Launching workers. 
00:22:14.475 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68187, failed: 0 00:22:14.475 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29939, failed to submit 38248 00:22:14.475 success 0, unsuccessful 29939, failed 0 00:22:14.475 16:10:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:22:14.475 16:10:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:17.757 Initializing NVMe Controllers 00:22:17.758 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:17.758 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:22:17.758 Initialization complete. Launching workers. 00:22:17.758 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78049, failed: 0 00:22:17.758 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19478, failed to submit 58571 00:22:17.758 success 0, unsuccessful 19478, failed 0 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:17.758 16:10:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:18.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:19.944 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:19.944 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:19.944 00:22:19.944 real 0m12.927s 00:22:19.944 user 0m6.404s 00:22:19.944 sys 0m4.009s 00:22:19.944 16:10:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.944 16:10:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:22:19.944 ************************************ 00:22:19.944 END TEST kernel_target_abort 00:22:19.944 ************************************ 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:22:19.944 
16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.944 16:10:17 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.944 rmmod nvme_tcp 00:22:19.944 rmmod nvme_fabrics 00:22:19.944 rmmod nvme_keyring 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84892 ']' 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84892 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 84892 ']' 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 84892 00:22:19.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84892) - No such process 00:22:19.944 Process with pid 84892 is not found 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 84892 is not found' 00:22:19.944 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:22:19.945 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:20.203 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:20.203 Waiting for block devices as requested 00:22:20.203 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:20.462 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:20.462 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:20.462 16:10:18 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:22:20.721 00:22:20.721 real 0m27.481s 00:22:20.721 user 0m51.270s 00:22:20.721 sys 0m7.606s 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:20.721 16:10:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:22:20.721 ************************************ 00:22:20.721 END TEST nvmf_abort_qd_sizes 00:22:20.721 ************************************ 00:22:20.721 16:10:18 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:20.721 16:10:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:20.721 16:10:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:20.721 16:10:18 -- common/autotest_common.sh@10 -- # set +x 00:22:20.721 ************************************ 00:22:20.721 START TEST keyring_file 00:22:20.721 ************************************ 00:22:20.721 16:10:18 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:22:20.980 * Looking for test storage... 
00:22:20.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:20.980 16:10:18 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:20.980 16:10:18 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:22:20.980 16:10:18 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:20.980 16:10:19 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.980 16:10:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:22:20.980 16:10:19 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.980 16:10:19 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:20.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.980 --rc genhtml_branch_coverage=1 00:22:20.980 --rc genhtml_function_coverage=1 00:22:20.980 --rc genhtml_legend=1 00:22:20.980 --rc geninfo_all_blocks=1 00:22:20.981 --rc geninfo_unexecuted_blocks=1 00:22:20.981 00:22:20.981 ' 00:22:20.981 16:10:19 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:20.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.981 --rc genhtml_branch_coverage=1 00:22:20.981 --rc genhtml_function_coverage=1 00:22:20.981 --rc genhtml_legend=1 00:22:20.981 --rc geninfo_all_blocks=1 00:22:20.981 --rc 
geninfo_unexecuted_blocks=1 00:22:20.981 00:22:20.981 ' 00:22:20.981 16:10:19 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:20.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.981 --rc genhtml_branch_coverage=1 00:22:20.981 --rc genhtml_function_coverage=1 00:22:20.981 --rc genhtml_legend=1 00:22:20.981 --rc geninfo_all_blocks=1 00:22:20.981 --rc geninfo_unexecuted_blocks=1 00:22:20.981 00:22:20.981 ' 00:22:20.981 16:10:19 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:20.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.981 --rc genhtml_branch_coverage=1 00:22:20.981 --rc genhtml_function_coverage=1 00:22:20.981 --rc genhtml_legend=1 00:22:20.981 --rc geninfo_all_blocks=1 00:22:20.981 --rc geninfo_unexecuted_blocks=1 00:22:20.981 00:22:20.981 ' 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.981 16:10:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.981 16:10:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.981 16:10:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.981 16:10:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.981 16:10:19 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.981 16:10:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.981 16:10:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.981 16:10:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:22:20.981 16:10:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:20.981 16:10:19 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.7OSs12BSvk 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.7OSs12BSvk 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.7OSs12BSvk 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.7OSs12BSvk 00:22:20.981 16:10:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gUPKMDVOvI 00:22:20.981 16:10:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:20.981 16:10:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:21.240 16:10:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gUPKMDVOvI 00:22:21.240 16:10:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gUPKMDVOvI 00:22:21.240 16:10:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.gUPKMDVOvI 00:22:21.240 16:10:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=85808 00:22:21.240 16:10:19 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:21.240 16:10:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85808 00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85808 ']' 00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
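The prep_key helper above builds each TLS PSK by feeding the raw hex key to an inline python step (format_interchange_psk / format_key in nvmf/common.sh) and then locking the resulting file down with chmod 0600. A minimal sketch of what that inline step presumably computes, assuming the usual NVMe TLS PSK interchange layout of prefix, two-digit hash indicator, and base64(key plus CRC32); the function name here is just for illustration:

    import base64
    import zlib

    def format_interchange_psk(key_hex: str, digest: int = 0, prefix: str = "NVMeTLSkey-1") -> str:
        # CRC32 of the raw key bytes is appended (little-endian here, an assumption)
        # before base64 encoding; digest fills the two-digit hash indicator field,
        # 0 in this test meaning no hash applied.
        key = bytes.fromhex(key_hex)
        crc = zlib.crc32(key).to_bytes(4, byteorder="little")
        return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode())

    # e.g. the key0 material used by this test:
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))

The printed string is what ends up in the mktemp path (/tmp/tmp.7OSs12BSvk for key0 in this run) and is later registered with keyring_file_add_key.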
00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.240 16:10:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:21.240 [2024-11-20 16:10:19.326697] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:22:21.240 [2024-11-20 16:10:19.326831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85808 ] 00:22:21.240 [2024-11-20 16:10:19.479960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.498 [2024-11-20 16:10:19.553076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.498 [2024-11-20 16:10:19.635756] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:21.756 16:10:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:21.756 [2024-11-20 16:10:19.857469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.756 null0 00:22:21.756 [2024-11-20 16:10:19.889435] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.756 [2024-11-20 16:10:19.889673] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.756 16:10:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.756 16:10:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:21.757 [2024-11-20 16:10:19.917413] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:22:21.757 request: 00:22:21.757 { 00:22:21.757 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:22:21.757 "secure_channel": false, 00:22:21.757 "listen_address": { 00:22:21.757 "trtype": "tcp", 00:22:21.757 "traddr": "127.0.0.1", 00:22:21.757 "trsvcid": "4420" 00:22:21.757 }, 00:22:21.757 "method": "nvmf_subsystem_add_listener", 
00:22:21.757 "req_id": 1 00:22:21.757 } 00:22:21.757 Got JSON-RPC error response 00:22:21.757 response: 00:22:21.757 { 00:22:21.757 "code": -32602, 00:22:21.757 "message": "Invalid parameters" 00:22:21.757 } 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:21.757 16:10:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=85819 00:22:21.757 16:10:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 85819 /var/tmp/bperf.sock 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 85819 ']' 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:21.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.757 16:10:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:21.757 16:10:19 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:22:21.757 [2024-11-20 16:10:19.988787] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
00:22:21.757 [2024-11-20 16:10:19.988928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85819 ] 00:22:22.023 [2024-11-20 16:10:20.142151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.023 [2024-11-20 16:10:20.210671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.023 [2024-11-20 16:10:20.268204] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:22.281 16:10:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.281 16:10:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:22.281 16:10:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:22.281 16:10:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:22.541 16:10:20 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gUPKMDVOvI 00:22:22.541 16:10:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gUPKMDVOvI 00:22:22.799 16:10:20 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:22:22.799 16:10:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:22:22.799 16:10:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:22.799 16:10:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:22.799 16:10:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:23.057 16:10:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.7OSs12BSvk == \/\t\m\p\/\t\m\p\.\7\O\S\s\1\2\B\S\v\k ]] 00:22:23.057 16:10:21 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:22:23.057 16:10:21 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:22:23.057 16:10:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:23.057 16:10:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:23.057 16:10:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:23.316 16:10:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.gUPKMDVOvI == \/\t\m\p\/\t\m\p\.\g\U\P\K\M\D\V\O\v\I ]] 00:22:23.316 16:10:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:22:23.316 16:10:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:23.316 16:10:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:23.316 16:10:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:23.316 16:10:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:23.316 16:10:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:23.883 16:10:21 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:22:23.883 16:10:21 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:22:23.883 16:10:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:23.883 16:10:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:23.883 16:10:21 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:23.883 16:10:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:23.883 16:10:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:24.142 16:10:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:22:24.142 16:10:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:24.142 16:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:24.402 [2024-11-20 16:10:22.479880] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.402 nvme0n1 00:22:24.402 16:10:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:22:24.402 16:10:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:24.402 16:10:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:24.402 16:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.402 16:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.402 16:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:24.661 16:10:22 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:22:24.661 16:10:22 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:22:24.661 16:10:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:24.661 16:10:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:24.661 16:10:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:24.661 16:10:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:24.661 16:10:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:24.920 16:10:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:22:24.920 16:10:23 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.178 Running I/O for 1 seconds... 
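The get_refcnt checks above reduce to keyring_get_keys filtered through jq '.[] | select(.name == "keyN")' and .refcnt. A Python equivalent of that jq pipeline, reusing the illustrative rpc_call sketch from earlier (same caveats apply):

    def get_refcnt(sock_path, name):
        # keyring_get_keys returns a list of key objects; pick the named one and
        # report how many consumers currently hold a reference to it.
        keys = rpc_call(sock_path, "keyring_get_keys")["result"]
        return next(key["refcnt"] for key in keys if key["name"] == name)

With nvme0n1 attached using --psk key0, the test expects key0's refcount to read 2 and key1's to stay at 1, matching the (( 2 == 2 )) and (( 1 == 1 )) checks in the log.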
00:22:26.116 11565.00 IOPS, 45.18 MiB/s 00:22:26.116 Latency(us) 00:22:26.116 [2024-11-20T16:10:24.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.116 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:22:26.116 nvme0n1 : 1.01 11595.46 45.29 0.00 0.00 10998.84 6315.29 18945.86 00:22:26.116 [2024-11-20T16:10:24.366Z] =================================================================================================================== 00:22:26.116 [2024-11-20T16:10:24.366Z] Total : 11595.46 45.29 0.00 0.00 10998.84 6315.29 18945.86 00:22:26.116 { 00:22:26.116 "results": [ 00:22:26.116 { 00:22:26.116 "job": "nvme0n1", 00:22:26.116 "core_mask": "0x2", 00:22:26.116 "workload": "randrw", 00:22:26.116 "percentage": 50, 00:22:26.116 "status": "finished", 00:22:26.116 "queue_depth": 128, 00:22:26.116 "io_size": 4096, 00:22:26.116 "runtime": 1.008584, 00:22:26.116 "iops": 11595.46453245342, 00:22:26.116 "mibps": 45.29478332989617, 00:22:26.116 "io_failed": 0, 00:22:26.116 "io_timeout": 0, 00:22:26.116 "avg_latency_us": 10998.841748999184, 00:22:26.116 "min_latency_us": 6315.2872727272725, 00:22:26.116 "max_latency_us": 18945.861818181816 00:22:26.116 } 00:22:26.116 ], 00:22:26.116 "core_count": 1 00:22:26.116 } 00:22:26.116 16:10:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:26.116 16:10:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:26.400 16:10:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:22:26.400 16:10:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:26.400 16:10:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:26.400 16:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:26.400 16:10:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.400 16:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:26.691 16:10:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:22:26.691 16:10:24 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:22:26.691 16:10:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:26.691 16:10:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:26.691 16:10:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:26.691 16:10:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:26.691 16:10:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:26.950 16:10:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:22:26.950 16:10:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:26.950 16:10:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:26.950 16:10:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:26.950 16:10:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:22:27.209 [2024-11-20 16:10:25.423784] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.209 [2024-11-20 16:10:25.424373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bbc60 (107): Transport endpoint is not connected 00:22:27.209 [2024-11-20 16:10:25.425365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bbc60 (9): Bad file descriptor 00:22:27.209 [2024-11-20 16:10:25.426362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:27.209 [2024-11-20 16:10:25.426384] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:27.209 [2024-11-20 16:10:25.426395] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:27.209 [2024-11-20 16:10:25.426406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:22:27.209 request: 00:22:27.209 { 00:22:27.209 "name": "nvme0", 00:22:27.209 "trtype": "tcp", 00:22:27.209 "traddr": "127.0.0.1", 00:22:27.209 "adrfam": "ipv4", 00:22:27.209 "trsvcid": "4420", 00:22:27.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:27.209 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:27.209 "prchk_reftag": false, 00:22:27.209 "prchk_guard": false, 00:22:27.209 "hdgst": false, 00:22:27.209 "ddgst": false, 00:22:27.209 "psk": "key1", 00:22:27.209 "allow_unrecognized_csi": false, 00:22:27.209 "method": "bdev_nvme_attach_controller", 00:22:27.209 "req_id": 1 00:22:27.209 } 00:22:27.209 Got JSON-RPC error response 00:22:27.209 response: 00:22:27.209 { 00:22:27.209 "code": -5, 00:22:27.209 "message": "Input/output error" 00:22:27.209 } 00:22:27.209 16:10:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:27.209 16:10:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:27.209 16:10:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:27.209 16:10:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:27.209 16:10:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:22:27.209 16:10:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:27.209 16:10:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:27.209 16:10:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:27.209 16:10:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:27.209 16:10:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:27.468 16:10:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:22:27.468 16:10:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:22:27.468 16:10:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:27.468 16:10:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:27.468 16:10:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:27.468 16:10:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:27.468 16:10:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.035 16:10:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:22:28.035 16:10:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:22:28.035 16:10:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:28.035 16:10:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:22:28.035 16:10:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:22:28.294 16:10:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:22:28.294 16:10:26 keyring_file -- keyring/file.sh@78 -- # jq length 00:22:28.294 16:10:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:28.862 16:10:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:22:28.862 16:10:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.7OSs12BSvk 00:22:28.862 16:10:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:28.862 16:10:26 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:28.862 16:10:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:28.862 16:10:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:29.121 [2024-11-20 16:10:27.166564] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7OSs12BSvk': 0100660 00:22:29.121 [2024-11-20 16:10:27.166639] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:29.121 request: 00:22:29.121 { 00:22:29.121 "name": "key0", 00:22:29.121 "path": "/tmp/tmp.7OSs12BSvk", 00:22:29.121 "method": "keyring_file_add_key", 00:22:29.121 "req_id": 1 00:22:29.121 } 00:22:29.121 Got JSON-RPC error response 00:22:29.121 response: 00:22:29.121 { 00:22:29.121 "code": -1, 00:22:29.121 "message": "Operation not permitted" 00:22:29.121 } 00:22:29.121 16:10:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:29.121 16:10:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.121 16:10:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.121 16:10:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.121 16:10:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.7OSs12BSvk 00:22:29.121 16:10:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:29.121 16:10:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.7OSs12BSvk 00:22:29.378 16:10:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.7OSs12BSvk 00:22:29.378 16:10:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:22:29.378 16:10:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:29.378 16:10:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:29.378 16:10:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:29.378 16:10:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:29.378 16:10:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:29.636 16:10:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:22:29.636 16:10:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:29.636 16:10:27 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.636 16:10:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:29.636 16:10:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:29.895 [2024-11-20 16:10:28.042753] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.7OSs12BSvk': No such file or directory 00:22:29.895 [2024-11-20 16:10:28.042793] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:22:29.895 [2024-11-20 16:10:28.042825] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:22:29.895 [2024-11-20 16:10:28.042837] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:22:29.895 [2024-11-20 16:10:28.042847] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:29.895 [2024-11-20 16:10:28.042857] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:22:29.895 request: 00:22:29.895 { 00:22:29.895 "name": "nvme0", 00:22:29.895 "trtype": "tcp", 00:22:29.895 "traddr": "127.0.0.1", 00:22:29.895 "adrfam": "ipv4", 00:22:29.895 "trsvcid": "4420", 00:22:29.895 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.895 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:29.895 "prchk_reftag": false, 00:22:29.895 "prchk_guard": false, 00:22:29.895 "hdgst": false, 00:22:29.895 "ddgst": false, 00:22:29.895 "psk": "key0", 00:22:29.895 "allow_unrecognized_csi": false, 00:22:29.895 "method": "bdev_nvme_attach_controller", 00:22:29.895 "req_id": 1 00:22:29.895 } 00:22:29.895 Got JSON-RPC error response 00:22:29.895 response: 00:22:29.895 { 00:22:29.895 "code": -19, 00:22:29.895 "message": "No such device" 00:22:29.895 } 00:22:29.895 16:10:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:22:29.895 16:10:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.895 16:10:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.895 16:10:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.895 16:10:28 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:22:29.895 16:10:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:30.153 16:10:28 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:30.153 
16:10:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.blXwTg69QY 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:22:30.153 16:10:28 keyring_file -- nvmf/common.sh@733 -- # python - 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.blXwTg69QY 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.blXwTg69QY 00:22:30.153 16:10:28 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.blXwTg69QY 00:22:30.153 16:10:28 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.blXwTg69QY 00:22:30.153 16:10:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.blXwTg69QY 00:22:30.411 16:10:28 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:30.412 16:10:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:30.978 nvme0n1 00:22:30.978 16:10:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:22:30.978 16:10:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:30.978 16:10:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:30.978 16:10:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:30.978 16:10:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:30.978 16:10:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:31.236 16:10:29 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:22:31.236 16:10:29 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:22:31.236 16:10:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:22:31.495 16:10:29 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:22:31.495 16:10:29 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:22:31.495 16:10:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:31.495 16:10:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:31.495 16:10:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:31.753 16:10:29 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:22:31.753 16:10:29 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:22:31.753 16:10:29 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:22:31.753 16:10:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:31.753 16:10:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:31.753 16:10:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:31.753 16:10:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:32.011 16:10:30 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:22:32.011 16:10:30 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:32.011 16:10:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:32.269 16:10:30 keyring_file -- keyring/file.sh@105 -- # jq length 00:22:32.269 16:10:30 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:22:32.269 16:10:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:32.527 16:10:30 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:22:32.527 16:10:30 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.blXwTg69QY 00:22:32.527 16:10:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.blXwTg69QY 00:22:32.786 16:10:30 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.gUPKMDVOvI 00:22:32.786 16:10:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.gUPKMDVOvI 00:22:33.045 16:10:31 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:33.045 16:10:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:22:33.612 nvme0n1 00:22:33.612 16:10:31 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:22:33.612 16:10:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:22:33.871 16:10:31 keyring_file -- keyring/file.sh@113 -- # config='{ 00:22:33.871 "subsystems": [ 00:22:33.871 { 00:22:33.871 "subsystem": "keyring", 00:22:33.871 "config": [ 00:22:33.871 { 00:22:33.871 "method": "keyring_file_add_key", 00:22:33.871 "params": { 00:22:33.871 "name": "key0", 00:22:33.871 "path": "/tmp/tmp.blXwTg69QY" 00:22:33.871 } 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "method": "keyring_file_add_key", 00:22:33.871 "params": { 00:22:33.871 "name": "key1", 00:22:33.871 "path": "/tmp/tmp.gUPKMDVOvI" 00:22:33.871 } 00:22:33.871 } 00:22:33.871 ] 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "subsystem": "iobuf", 00:22:33.871 "config": [ 00:22:33.871 { 00:22:33.871 "method": "iobuf_set_options", 00:22:33.871 "params": { 00:22:33.871 "small_pool_count": 8192, 00:22:33.871 "large_pool_count": 1024, 00:22:33.871 "small_bufsize": 8192, 00:22:33.871 "large_bufsize": 135168, 00:22:33.871 "enable_numa": false 00:22:33.871 } 00:22:33.871 } 00:22:33.871 ] 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "subsystem": 
"sock", 00:22:33.871 "config": [ 00:22:33.871 { 00:22:33.871 "method": "sock_set_default_impl", 00:22:33.871 "params": { 00:22:33.871 "impl_name": "uring" 00:22:33.871 } 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "method": "sock_impl_set_options", 00:22:33.871 "params": { 00:22:33.871 "impl_name": "ssl", 00:22:33.871 "recv_buf_size": 4096, 00:22:33.871 "send_buf_size": 4096, 00:22:33.871 "enable_recv_pipe": true, 00:22:33.871 "enable_quickack": false, 00:22:33.871 "enable_placement_id": 0, 00:22:33.871 "enable_zerocopy_send_server": true, 00:22:33.871 "enable_zerocopy_send_client": false, 00:22:33.871 "zerocopy_threshold": 0, 00:22:33.871 "tls_version": 0, 00:22:33.871 "enable_ktls": false 00:22:33.871 } 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "method": "sock_impl_set_options", 00:22:33.871 "params": { 00:22:33.871 "impl_name": "posix", 00:22:33.871 "recv_buf_size": 2097152, 00:22:33.871 "send_buf_size": 2097152, 00:22:33.871 "enable_recv_pipe": true, 00:22:33.871 "enable_quickack": false, 00:22:33.871 "enable_placement_id": 0, 00:22:33.871 "enable_zerocopy_send_server": true, 00:22:33.871 "enable_zerocopy_send_client": false, 00:22:33.871 "zerocopy_threshold": 0, 00:22:33.871 "tls_version": 0, 00:22:33.871 "enable_ktls": false 00:22:33.871 } 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "method": "sock_impl_set_options", 00:22:33.871 "params": { 00:22:33.871 "impl_name": "uring", 00:22:33.871 "recv_buf_size": 2097152, 00:22:33.871 "send_buf_size": 2097152, 00:22:33.871 "enable_recv_pipe": true, 00:22:33.871 "enable_quickack": false, 00:22:33.871 "enable_placement_id": 0, 00:22:33.871 "enable_zerocopy_send_server": false, 00:22:33.871 "enable_zerocopy_send_client": false, 00:22:33.871 "zerocopy_threshold": 0, 00:22:33.871 "tls_version": 0, 00:22:33.871 "enable_ktls": false 00:22:33.871 } 00:22:33.871 } 00:22:33.871 ] 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "subsystem": "vmd", 00:22:33.871 "config": [] 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "subsystem": "accel", 00:22:33.871 "config": [ 00:22:33.871 { 00:22:33.871 "method": "accel_set_options", 00:22:33.871 "params": { 00:22:33.871 "small_cache_size": 128, 00:22:33.871 "large_cache_size": 16, 00:22:33.871 "task_count": 2048, 00:22:33.871 "sequence_count": 2048, 00:22:33.871 "buf_count": 2048 00:22:33.871 } 00:22:33.871 } 00:22:33.871 ] 00:22:33.871 }, 00:22:33.871 { 00:22:33.871 "subsystem": "bdev", 00:22:33.871 "config": [ 00:22:33.871 { 00:22:33.871 "method": "bdev_set_options", 00:22:33.871 "params": { 00:22:33.871 "bdev_io_pool_size": 65535, 00:22:33.871 "bdev_io_cache_size": 256, 00:22:33.871 "bdev_auto_examine": true, 00:22:33.871 "iobuf_small_cache_size": 128, 00:22:33.871 "iobuf_large_cache_size": 16 00:22:33.871 } 00:22:33.871 }, 00:22:33.871 { 00:22:33.872 "method": "bdev_raid_set_options", 00:22:33.872 "params": { 00:22:33.872 "process_window_size_kb": 1024, 00:22:33.872 "process_max_bandwidth_mb_sec": 0 00:22:33.872 } 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "method": "bdev_iscsi_set_options", 00:22:33.872 "params": { 00:22:33.872 "timeout_sec": 30 00:22:33.872 } 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "method": "bdev_nvme_set_options", 00:22:33.872 "params": { 00:22:33.872 "action_on_timeout": "none", 00:22:33.872 "timeout_us": 0, 00:22:33.872 "timeout_admin_us": 0, 00:22:33.872 "keep_alive_timeout_ms": 10000, 00:22:33.872 "arbitration_burst": 0, 00:22:33.872 "low_priority_weight": 0, 00:22:33.872 "medium_priority_weight": 0, 00:22:33.872 "high_priority_weight": 0, 00:22:33.872 "nvme_adminq_poll_period_us": 
10000, 00:22:33.872 "nvme_ioq_poll_period_us": 0, 00:22:33.872 "io_queue_requests": 512, 00:22:33.872 "delay_cmd_submit": true, 00:22:33.872 "transport_retry_count": 4, 00:22:33.872 "bdev_retry_count": 3, 00:22:33.872 "transport_ack_timeout": 0, 00:22:33.872 "ctrlr_loss_timeout_sec": 0, 00:22:33.872 "reconnect_delay_sec": 0, 00:22:33.872 "fast_io_fail_timeout_sec": 0, 00:22:33.872 "disable_auto_failback": false, 00:22:33.872 "generate_uuids": false, 00:22:33.872 "transport_tos": 0, 00:22:33.872 "nvme_error_stat": false, 00:22:33.872 "rdma_srq_size": 0, 00:22:33.872 "io_path_stat": false, 00:22:33.872 "allow_accel_sequence": false, 00:22:33.872 "rdma_max_cq_size": 0, 00:22:33.872 "rdma_cm_event_timeout_ms": 0, 00:22:33.872 "dhchap_digests": [ 00:22:33.872 "sha256", 00:22:33.872 "sha384", 00:22:33.872 "sha512" 00:22:33.872 ], 00:22:33.872 "dhchap_dhgroups": [ 00:22:33.872 "null", 00:22:33.872 "ffdhe2048", 00:22:33.872 "ffdhe3072", 00:22:33.872 "ffdhe4096", 00:22:33.872 "ffdhe6144", 00:22:33.872 "ffdhe8192" 00:22:33.872 ] 00:22:33.872 } 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "method": "bdev_nvme_attach_controller", 00:22:33.872 "params": { 00:22:33.872 "name": "nvme0", 00:22:33.872 "trtype": "TCP", 00:22:33.872 "adrfam": "IPv4", 00:22:33.872 "traddr": "127.0.0.1", 00:22:33.872 "trsvcid": "4420", 00:22:33.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:33.872 "prchk_reftag": false, 00:22:33.872 "prchk_guard": false, 00:22:33.872 "ctrlr_loss_timeout_sec": 0, 00:22:33.872 "reconnect_delay_sec": 0, 00:22:33.872 "fast_io_fail_timeout_sec": 0, 00:22:33.872 "psk": "key0", 00:22:33.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:33.872 "hdgst": false, 00:22:33.872 "ddgst": false, 00:22:33.872 "multipath": "multipath" 00:22:33.872 } 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "method": "bdev_nvme_set_hotplug", 00:22:33.872 "params": { 00:22:33.872 "period_us": 100000, 00:22:33.872 "enable": false 00:22:33.872 } 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "method": "bdev_wait_for_examine" 00:22:33.872 } 00:22:33.872 ] 00:22:33.872 }, 00:22:33.872 { 00:22:33.872 "subsystem": "nbd", 00:22:33.872 "config": [] 00:22:33.872 } 00:22:33.872 ] 00:22:33.872 }' 00:22:33.872 16:10:31 keyring_file -- keyring/file.sh@115 -- # killprocess 85819 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85819 ']' 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85819 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85819 00:22:33.872 killing process with pid 85819 00:22:33.872 Received shutdown signal, test time was about 1.000000 seconds 00:22:33.872 00:22:33.872 Latency(us) 00:22:33.872 [2024-11-20T16:10:32.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.872 [2024-11-20T16:10:32.122Z] =================================================================================================================== 00:22:33.872 [2024-11-20T16:10:32.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85819' 00:22:33.872 
16:10:31 keyring_file -- common/autotest_common.sh@973 -- # kill 85819 00:22:33.872 16:10:31 keyring_file -- common/autotest_common.sh@978 -- # wait 85819 00:22:34.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:34.132 16:10:32 keyring_file -- keyring/file.sh@118 -- # bperfpid=86073 00:22:34.132 16:10:32 keyring_file -- keyring/file.sh@120 -- # waitforlisten 86073 /var/tmp/bperf.sock 00:22:34.132 16:10:32 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 86073 ']' 00:22:34.132 16:10:32 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:34.132 16:10:32 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.132 16:10:32 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:22:34.132 16:10:32 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:34.132 16:10:32 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:22:34.132 "subsystems": [ 00:22:34.132 { 00:22:34.132 "subsystem": "keyring", 00:22:34.132 "config": [ 00:22:34.132 { 00:22:34.132 "method": "keyring_file_add_key", 00:22:34.132 "params": { 00:22:34.132 "name": "key0", 00:22:34.132 "path": "/tmp/tmp.blXwTg69QY" 00:22:34.132 } 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "method": "keyring_file_add_key", 00:22:34.132 "params": { 00:22:34.132 "name": "key1", 00:22:34.132 "path": "/tmp/tmp.gUPKMDVOvI" 00:22:34.132 } 00:22:34.132 } 00:22:34.132 ] 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "subsystem": "iobuf", 00:22:34.132 "config": [ 00:22:34.132 { 00:22:34.132 "method": "iobuf_set_options", 00:22:34.132 "params": { 00:22:34.132 "small_pool_count": 8192, 00:22:34.132 "large_pool_count": 1024, 00:22:34.132 "small_bufsize": 8192, 00:22:34.132 "large_bufsize": 135168, 00:22:34.132 "enable_numa": false 00:22:34.132 } 00:22:34.132 } 00:22:34.132 ] 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "subsystem": "sock", 00:22:34.132 "config": [ 00:22:34.132 { 00:22:34.132 "method": "sock_set_default_impl", 00:22:34.132 "params": { 00:22:34.132 "impl_name": "uring" 00:22:34.132 } 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "method": "sock_impl_set_options", 00:22:34.132 "params": { 00:22:34.132 "impl_name": "ssl", 00:22:34.132 "recv_buf_size": 4096, 00:22:34.132 "send_buf_size": 4096, 00:22:34.132 "enable_recv_pipe": true, 00:22:34.132 "enable_quickack": false, 00:22:34.132 "enable_placement_id": 0, 00:22:34.132 "enable_zerocopy_send_server": true, 00:22:34.132 "enable_zerocopy_send_client": false, 00:22:34.132 "zerocopy_threshold": 0, 00:22:34.132 "tls_version": 0, 00:22:34.132 "enable_ktls": false 00:22:34.132 } 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "method": "sock_impl_set_options", 00:22:34.132 "params": { 00:22:34.132 "impl_name": "posix", 00:22:34.132 "recv_buf_size": 2097152, 00:22:34.132 "send_buf_size": 2097152, 00:22:34.132 "enable_recv_pipe": true, 00:22:34.132 "enable_quickack": false, 00:22:34.132 "enable_placement_id": 0, 00:22:34.132 "enable_zerocopy_send_server": true, 00:22:34.132 "enable_zerocopy_send_client": false, 00:22:34.132 "zerocopy_threshold": 0, 00:22:34.132 "tls_version": 0, 00:22:34.132 "enable_ktls": false 00:22:34.132 } 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "method": "sock_impl_set_options", 00:22:34.132 "params": { 00:22:34.132 "impl_name": "uring", 00:22:34.132 
"recv_buf_size": 2097152, 00:22:34.132 "send_buf_size": 2097152, 00:22:34.132 "enable_recv_pipe": true, 00:22:34.132 "enable_quickack": false, 00:22:34.132 "enable_placement_id": 0, 00:22:34.132 "enable_zerocopy_send_server": false, 00:22:34.132 "enable_zerocopy_send_client": false, 00:22:34.132 "zerocopy_threshold": 0, 00:22:34.132 "tls_version": 0, 00:22:34.132 "enable_ktls": false 00:22:34.132 } 00:22:34.132 } 00:22:34.132 ] 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "subsystem": "vmd", 00:22:34.132 "config": [] 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "subsystem": "accel", 00:22:34.132 "config": [ 00:22:34.132 { 00:22:34.132 "method": "accel_set_options", 00:22:34.132 "params": { 00:22:34.132 "small_cache_size": 128, 00:22:34.132 "large_cache_size": 16, 00:22:34.132 "task_count": 2048, 00:22:34.132 "sequence_count": 2048, 00:22:34.132 "buf_count": 2048 00:22:34.132 } 00:22:34.132 } 00:22:34.132 ] 00:22:34.132 }, 00:22:34.132 { 00:22:34.132 "subsystem": "bdev", 00:22:34.132 "config": [ 00:22:34.132 { 00:22:34.132 "method": "bdev_set_options", 00:22:34.133 "params": { 00:22:34.133 "bdev_io_pool_size": 65535, 00:22:34.133 "bdev_io_cache_size": 256, 00:22:34.133 "bdev_auto_examine": true, 00:22:34.133 "iobuf_small_cache_size": 128, 00:22:34.133 "iobuf_large_cache_size": 16 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_raid_set_options", 00:22:34.133 "params": { 00:22:34.133 "process_window_size_kb": 1024, 00:22:34.133 "process_max_bandwidth_mb_sec": 0 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_iscsi_set_options", 00:22:34.133 "params": { 00:22:34.133 "timeout_sec": 30 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_nvme_set_options", 00:22:34.133 "params": { 00:22:34.133 "action_on_timeout": "none", 00:22:34.133 "timeout_us": 0, 00:22:34.133 "timeout_admin_us": 0, 00:22:34.133 "keep_alive_timeout_ms": 10000, 00:22:34.133 "arbitration_burst": 0, 00:22:34.133 "low_priority_weight": 0, 00:22:34.133 "medium_priority_weight": 0, 00:22:34.133 "high_priority_weight": 0, 00:22:34.133 "nvme_adminq_poll_period_us": 10000, 00:22:34.133 "nvme_ioq_poll_period_us": 0, 00:22:34.133 "io_queue_requests": 512, 00:22:34.133 "delay_cmd_submit": true, 00:22:34.133 "transport_retry_count": 4, 00:22:34.133 "bdev_retry_count": 3, 00:22:34.133 "transport_ack_timeout": 0, 00:22:34.133 "ctrlr_loss_timeout_sec": 0, 00:22:34.133 "reconnect_delay_sec": 0, 00:22:34.133 "fast_io_fail_timeout_sec": 0, 00:22:34.133 "disable_auto_failback": false, 00:22:34.133 "generate_uuids": false, 00:22:34.133 "transport_tos": 0, 00:22:34.133 "nvme_error_stat": false, 00:22:34.133 "rdma_srq_size": 0, 00:22:34.133 "io_path_stat": false, 00:22:34.133 "allow_accel_sequence": false, 00:22:34.133 "rdma_max_cq_size": 0, 00:22:34.133 "rdma_cm_event_timeout_ms": 0, 00:22:34.133 "dhchap_digests": [ 00:22:34.133 "sha256", 00:22:34.133 "sha384", 00:22:34.133 "sha512" 00:22:34.133 ], 00:22:34.133 "dhchap_dhgroups": [ 00:22:34.133 "null", 00:22:34.133 "ffdhe2048", 00:22:34.133 "ffdhe3072", 00:22:34.133 "ffdhe4096", 00:22:34.133 "ffdhe6144", 00:22:34.133 "ffdhe8192" 00:22:34.133 ] 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_nvme_attach_controller", 00:22:34.133 "params": { 00:22:34.133 "name": "nvme0", 00:22:34.133 "trtype": "TCP", 00:22:34.133 "adrfam": "IPv4", 00:22:34.133 "traddr": "127.0.0.1", 00:22:34.133 "trsvcid": "4420", 00:22:34.133 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.133 "prchk_reftag": false, 00:22:34.133 
"prchk_guard": false, 00:22:34.133 "ctrlr_loss_timeout_sec": 0, 00:22:34.133 "reconnect_delay_sec": 0, 00:22:34.133 "fast_io_fail_timeout_sec": 0, 00:22:34.133 "psk": "key0", 00:22:34.133 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:34.133 "hdgst": false, 00:22:34.133 "ddgst": false, 00:22:34.133 "multipath": "multipath" 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_nvme_set_hotplug", 00:22:34.133 "params": { 00:22:34.133 "period_us": 100000, 00:22:34.133 "enable": false 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "method": "bdev_wait_for_examine" 00:22:34.133 } 00:22:34.133 ] 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "subsystem": "nbd", 00:22:34.133 "config": [] 00:22:34.133 } 00:22:34.133 ] 00:22:34.133 }' 00:22:34.133 16:10:32 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.133 16:10:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 [2024-11-20 16:10:32.246701] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 00:22:34.133 [2024-11-20 16:10:32.246965] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86073 ] 00:22:34.391 [2024-11-20 16:10:32.385623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.391 [2024-11-20 16:10:32.437505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.391 [2024-11-20 16:10:32.573346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:34.391 [2024-11-20 16:10:32.631334] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:35.351 16:10:33 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.351 16:10:33 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:22:35.351 16:10:33 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:35.351 16:10:33 keyring_file -- keyring/file.sh@121 -- # jq length 00:22:35.351 16:10:33 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:22:35.351 16:10:33 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:35.351 16:10:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:35.610 16:10:33 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:22:35.610 16:10:33 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:22:35.610 16:10:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:22:35.610 16:10:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:22:35.610 16:10:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:35.610 16:10:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:35.610 16:10:33 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:22:36.176 16:10:34 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:22:36.177 16:10:34 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:22:36.177 16:10:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:22:36.177 16:10:34 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:22:36.434 16:10:34 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:22:36.434 16:10:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:22:36.434 16:10:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.blXwTg69QY /tmp/tmp.gUPKMDVOvI 00:22:36.434 16:10:34 keyring_file -- keyring/file.sh@20 -- # killprocess 86073 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 86073 ']' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 86073 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86073 00:22:36.434 killing process with pid 86073 00:22:36.434 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.434 00:22:36.434 Latency(us) 00:22:36.434 [2024-11-20T16:10:34.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.434 [2024-11-20T16:10:34.684Z] =================================================================================================================== 00:22:36.434 [2024-11-20T16:10:34.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86073' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@973 -- # kill 86073 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@978 -- # wait 86073 00:22:36.434 16:10:34 keyring_file -- keyring/file.sh@21 -- # killprocess 85808 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 85808 ']' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 85808 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@959 -- # uname 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:36.434 16:10:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85808 00:22:36.697 killing process with pid 85808 00:22:36.697 16:10:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:36.697 16:10:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:36.697 16:10:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85808' 00:22:36.697 16:10:34 keyring_file -- common/autotest_common.sh@973 -- # kill 85808 00:22:36.697 16:10:34 keyring_file -- common/autotest_common.sh@978 -- # wait 85808 00:22:36.956 00:22:36.956 real 0m16.210s 00:22:36.956 user 0m41.416s 00:22:36.956 sys 0m3.076s 00:22:36.956 16:10:35 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:36.956 
************************************ 00:22:36.956 END TEST keyring_file 00:22:36.956 ************************************ 00:22:36.956 16:10:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:22:36.956 16:10:35 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:22:36.956 16:10:35 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:36.956 16:10:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:36.956 16:10:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:36.956 16:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:36.956 ************************************ 00:22:36.956 START TEST keyring_linux 00:22:36.956 ************************************ 00:22:36.956 16:10:35 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:22:36.956 Joined session keyring: 641472565 00:22:37.216 * Looking for test storage... 00:22:37.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@345 -- # : 1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.216 16:10:35 keyring_linux -- scripts/common.sh@368 -- # return 0 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.216 --rc genhtml_branch_coverage=1 00:22:37.216 --rc genhtml_function_coverage=1 00:22:37.216 --rc genhtml_legend=1 00:22:37.216 --rc geninfo_all_blocks=1 00:22:37.216 --rc geninfo_unexecuted_blocks=1 00:22:37.216 00:22:37.216 ' 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.216 --rc genhtml_branch_coverage=1 00:22:37.216 --rc genhtml_function_coverage=1 00:22:37.216 --rc genhtml_legend=1 00:22:37.216 --rc geninfo_all_blocks=1 00:22:37.216 --rc geninfo_unexecuted_blocks=1 00:22:37.216 00:22:37.216 ' 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.216 --rc genhtml_branch_coverage=1 00:22:37.216 --rc genhtml_function_coverage=1 00:22:37.216 --rc genhtml_legend=1 00:22:37.216 --rc geninfo_all_blocks=1 00:22:37.216 --rc geninfo_unexecuted_blocks=1 00:22:37.216 00:22:37.216 ' 00:22:37.216 16:10:35 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:37.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.216 --rc genhtml_branch_coverage=1 00:22:37.216 --rc genhtml_function_coverage=1 00:22:37.216 --rc genhtml_legend=1 00:22:37.216 --rc geninfo_all_blocks=1 00:22:37.216 --rc geninfo_unexecuted_blocks=1 00:22:37.216 00:22:37.216 ' 00:22:37.216 16:10:35 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.217 16:10:35 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ca768c1a-78f6-4242-8009-85e76e7a8123 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ca768c1a-78f6-4242-8009-85e76e7a8123 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:37.217 16:10:35 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.217 16:10:35 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.217 16:10:35 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.217 16:10:35 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.217 16:10:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.217 16:10:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.217 16:10:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.217 16:10:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:22:37.217 16:10:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.217 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:22:37.217 /tmp/:spdk-test:key0 00:22:37.217 16:10:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:22:37.217 16:10:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:22:37.217 16:10:35 keyring_linux -- nvmf/common.sh@733 -- # python - 00:22:37.476 16:10:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:22:37.476 16:10:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:22:37.476 /tmp/:spdk-test:key1 00:22:37.476 16:10:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=86199 00:22:37.476 16:10:35 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:37.476 16:10:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 86199 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86199 ']' 00:22:37.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.476 16:10:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:37.476 [2024-11-20 16:10:35.563772] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
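The /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 files written by prep_key above hold the key material in the NVMe TLS PSK interchange format: a NVMeTLSkey-1:<hash-id>: prefix (00 here, since digest 0 was requested), the base64 of the configured key bytes with a 4-byte CRC-32 appended, and a trailing colon. Below is a minimal sketch of that encoding, not the exact python snippet from nvmf/common.sh; the helper name and the little-endian CRC byte order are assumptions.

  # Sketch only: build an interchange string like the ones written above.
  # Assumes python3 on PATH; CRC-32 appended little-endian (assumption).
  psk_to_interchange() {
      local key=$1 digest=${2:-0}
      python3 -c 'import base64, struct, sys, zlib
  key = sys.argv[1].encode()
  digest = int(sys.argv[2])
  crc = struct.pack("<I", zlib.crc32(key))  # byte order assumed, see note above
  print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))' "$key" "$digest"
  }
  # Usage, mirroring key0 above:
  #   psk_to_interchange 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
  #   chmod 0600 /tmp/:spdk-test:key0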
00:22:37.476 [2024-11-20 16:10:35.564148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86199 ] 00:22:37.476 [2024-11-20 16:10:35.717556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.735 [2024-11-20 16:10:35.786286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.735 [2024-11-20 16:10:35.865375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 [2024-11-20 16:10:36.610525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.672 null0 00:22:38.672 [2024-11-20 16:10:36.642501] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.672 [2024-11-20 16:10:36.642694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:22:38.672 713130789 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:22:38.672 58349772 00:22:38.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=86217 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:22:38.672 16:10:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 86217 /var/tmp/bperf.sock 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 86217 ']' 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.672 16:10:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:38.672 [2024-11-20 16:10:36.733324] Starting SPDK v25.01-pre git sha1 0728de5b0 / DPDK 24.03.0 initialization... 
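The two keyctl add calls above load the interchange strings into the session keyring (@s) as user-type keys named :spdk-test:key0 and :spdk-test:key1; the numbers printed (713130789 and 58349772) are the kernel key serials that the later keyctl search and keyctl unlink steps operate on. A minimal sketch of that round trip follows; the key name and payload are placeholders, not the test's values.

  # Sketch of the session-keyring round trip used by keyring_linux.
  name=':spdk-test:demo'
  payload='example TLS PSK payload'

  sn=$(keyctl add user "$name" "$payload" @s)   # add to session keyring, prints the serial
  keyctl search @s user "$name"                 # resolves the same serial by name
  keyctl print "$sn"                            # dumps the stored payload
  keyctl unlink "$sn"                           # removes the link; prints "1 links removed"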
00:22:38.672 [2024-11-20 16:10:36.733608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86217 ] 00:22:38.672 [2024-11-20 16:10:36.884133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.930 [2024-11-20 16:10:36.943098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.930 16:10:36 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.930 16:10:36 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:22:38.930 16:10:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:22:38.930 16:10:36 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:22:39.188 16:10:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:22:39.188 16:10:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:39.446 [2024-11-20 16:10:37.551086] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:39.446 16:10:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:39.446 16:10:37 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:22:39.704 [2024-11-20 16:10:37.910882] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.962 nvme0n1 00:22:39.962 16:10:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:22:39.962 16:10:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:22:39.962 16:10:38 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:39.962 16:10:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:39.962 16:10:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:39.962 16:10:38 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:40.221 16:10:38 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:22:40.221 16:10:38 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:40.222 16:10:38 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:22:40.222 16:10:38 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:22:40.222 16:10:38 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:22:40.222 16:10:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:40.222 16:10:38 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@25 -- # sn=713130789 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
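check_keys above compares the SPDK view of the keyring (keyring_get_keys over the bperf RPC socket) with the kernel view (keyctl search) and then matches the serial numbers. The same rpc.py-plus-jq pattern appears throughout this log; a condensed sketch of the check, assuming the rpc.py path and bperf socket used in this run and that jq is installed:

  # Sketch of the key-count / serial check done by check_keys.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  count=$("$rpc" -s "$sock" keyring_get_keys | jq length)
  sn=$("$rpc" -s "$sock" keyring_get_keys \
        | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
  [[ $sn == $(keyctl search @s user :spdk-test:key0) ]] && echo "serials match ($count key(s))"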
00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 713130789 == \7\1\3\1\3\0\7\8\9 ]] 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 713130789 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:40.481 16:10:38 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:40.481 Running I/O for 1 seconds... 00:22:41.857 13883.00 IOPS, 54.23 MiB/s 00:22:41.857 Latency(us) 00:22:41.857 [2024-11-20T16:10:40.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.857 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:41.857 nvme0n1 : 1.01 13887.61 54.25 0.00 0.00 9170.59 4974.78 14477.50 00:22:41.857 [2024-11-20T16:10:40.107Z] =================================================================================================================== 00:22:41.857 [2024-11-20T16:10:40.107Z] Total : 13887.61 54.25 0.00 0.00 9170.59 4974.78 14477.50 00:22:41.857 { 00:22:41.857 "results": [ 00:22:41.857 { 00:22:41.857 "job": "nvme0n1", 00:22:41.857 "core_mask": "0x2", 00:22:41.857 "workload": "randread", 00:22:41.857 "status": "finished", 00:22:41.857 "queue_depth": 128, 00:22:41.857 "io_size": 4096, 00:22:41.857 "runtime": 1.008957, 00:22:41.857 "iops": 13887.60868897287, 00:22:41.857 "mibps": 54.248471441300275, 00:22:41.857 "io_failed": 0, 00:22:41.857 "io_timeout": 0, 00:22:41.857 "avg_latency_us": 9170.58609295928, 00:22:41.857 "min_latency_us": 4974.778181818182, 00:22:41.857 "max_latency_us": 14477.498181818182 00:22:41.857 } 00:22:41.857 ], 00:22:41.857 "core_count": 1 00:22:41.857 } 00:22:41.857 16:10:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:41.857 16:10:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:41.857 16:10:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:41.857 16:10:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:41.857 16:10:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:41.857 16:10:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:41.857 16:10:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:41.857 16:10:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:42.424 16:10:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:42.424 16:10:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:42.424 16:10:40 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:42.424 16:10:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:42.424 
16:10:40 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:42.424 16:10:40 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:42.424 16:10:40 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:42.683 [2024-11-20 16:10:40.686499] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:42.683 [2024-11-20 16:10:40.687183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb15d0 (107): Transport endpoint is not connected 00:22:42.683 [2024-11-20 16:10:40.688181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb15d0 (9): Bad file descriptor 00:22:42.683 [2024-11-20 16:10:40.689177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:22:42.683 [2024-11-20 16:10:40.689199] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:42.683 [2024-11-20 16:10:40.689226] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:22:42.683 [2024-11-20 16:10:40.689237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
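The attach attempt traced above uses :spdk-test:key1, which the target listener was never configured with, so the connect is expected to fail; the NOT/valid_exec_arg wrapper only lets the test pass when the RPC exits non-zero, and the failed request plus its JSON-RPC error response are dumped just below. A compact sketch of that expected-failure check, with the flags taken from the command above (paths and NQNs as in this run):

  # Sketch of the expected-failure attach (wrong PSK name on purpose).
  # The test only passes if this command fails.
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
       bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi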
00:22:42.683 request: 00:22:42.683 { 00:22:42.683 "name": "nvme0", 00:22:42.683 "trtype": "tcp", 00:22:42.683 "traddr": "127.0.0.1", 00:22:42.683 "adrfam": "ipv4", 00:22:42.683 "trsvcid": "4420", 00:22:42.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:42.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:42.683 "prchk_reftag": false, 00:22:42.683 "prchk_guard": false, 00:22:42.683 "hdgst": false, 00:22:42.683 "ddgst": false, 00:22:42.683 "psk": ":spdk-test:key1", 00:22:42.683 "allow_unrecognized_csi": false, 00:22:42.683 "method": "bdev_nvme_attach_controller", 00:22:42.683 "req_id": 1 00:22:42.683 } 00:22:42.683 Got JSON-RPC error response 00:22:42.683 response: 00:22:42.683 { 00:22:42.683 "code": -5, 00:22:42.683 "message": "Input/output error" 00:22:42.683 } 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@33 -- # sn=713130789 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 713130789 00:22:42.683 1 links removed 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@33 -- # sn=58349772 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 58349772 00:22:42.683 1 links removed 00:22:42.683 16:10:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 86217 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86217 ']' 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86217 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86217 00:22:42.683 killing process with pid 86217 00:22:42.683 Received shutdown signal, test time was about 1.000000 seconds 00:22:42.683 00:22:42.683 Latency(us) 00:22:42.683 [2024-11-20T16:10:40.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.683 [2024-11-20T16:10:40.933Z] =================================================================================================================== 00:22:42.683 [2024-11-20T16:10:40.933Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.683 16:10:40 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86217' 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@973 -- # kill 86217 00:22:42.683 16:10:40 keyring_linux -- common/autotest_common.sh@978 -- # wait 86217 00:22:42.942 16:10:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 86199 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 86199 ']' 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 86199 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86199 00:22:42.942 killing process with pid 86199 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86199' 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@973 -- # kill 86199 00:22:42.942 16:10:40 keyring_linux -- common/autotest_common.sh@978 -- # wait 86199 00:22:43.200 ************************************ 00:22:43.200 END TEST keyring_linux 00:22:43.200 ************************************ 00:22:43.200 00:22:43.200 real 0m6.234s 00:22:43.200 user 0m12.153s 00:22:43.200 sys 0m1.591s 00:22:43.200 16:10:41 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.200 16:10:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:43.200 16:10:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:43.200 16:10:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:43.200 16:10:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:43.200 16:10:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:43.200 16:10:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:43.200 16:10:41 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:22:43.200 16:10:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:43.200 16:10:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.200 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:43.200 16:10:41 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:43.200 16:10:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:43.200 16:10:41 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:43.200 16:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:45.178 INFO: APP EXITING 00:22:45.178 INFO: killing all VMs 
00:22:45.178 INFO: killing vhost app 00:22:45.178 INFO: EXIT DONE 00:22:45.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:46.001 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:46.002 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:46.569 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:46.569 Cleaning 00:22:46.569 Removing: /var/run/dpdk/spdk0/config 00:22:46.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:46.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:46.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:46.569 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:46.569 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:46.569 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:46.569 Removing: /var/run/dpdk/spdk1/config 00:22:46.569 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:46.569 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:46.569 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:46.569 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:46.569 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:46.569 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:46.569 Removing: /var/run/dpdk/spdk2/config 00:22:46.569 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:46.569 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:46.569 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:46.569 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:46.569 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:46.569 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:46.569 Removing: /var/run/dpdk/spdk3/config 00:22:46.569 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:46.569 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:46.569 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:46.569 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:46.569 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:46.569 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:46.569 Removing: /var/run/dpdk/spdk4/config 00:22:46.569 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:46.569 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:46.569 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:46.569 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:46.569 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:46.569 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:46.569 Removing: /dev/shm/nvmf_trace.0 00:22:46.829 Removing: /dev/shm/spdk_tgt_trace.pid56958 00:22:46.829 Removing: /var/run/dpdk/spdk0 00:22:46.829 Removing: /var/run/dpdk/spdk1 00:22:46.829 Removing: /var/run/dpdk/spdk2 00:22:46.829 Removing: /var/run/dpdk/spdk3 00:22:46.829 Removing: /var/run/dpdk/spdk4 00:22:46.829 Removing: /var/run/dpdk/spdk_pid56805 00:22:46.829 Removing: /var/run/dpdk/spdk_pid56958 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57164 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57250 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57278 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57387 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57411 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57545 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57746 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57900 00:22:46.829 Removing: /var/run/dpdk/spdk_pid57978 00:22:46.829 
Removing: /var/run/dpdk/spdk_pid58049 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58146 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58218 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58256 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58292 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58356 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58461 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58905 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58950 00:22:46.829 Removing: /var/run/dpdk/spdk_pid58993 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59002 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59069 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59077 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59144 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59160 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59206 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59228 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59269 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59287 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59418 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59453 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59536 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59868 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59881 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59912 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59931 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59952 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59971 00:22:46.829 Removing: /var/run/dpdk/spdk_pid59979 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60000 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60019 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60040 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60050 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60079 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60088 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60109 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60128 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60146 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60157 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60176 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60195 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60215 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60241 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60260 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60296 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60365 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60393 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60408 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60437 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60446 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60454 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60496 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60510 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60538 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60553 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60563 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60572 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60582 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60591 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60601 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60610 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60639 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60671 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60675 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60709 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60718 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60726 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60774 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60780 00:22:46.829 Removing: 
/var/run/dpdk/spdk_pid60812 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60818 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60827 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60834 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60842 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60855 00:22:46.829 Removing: /var/run/dpdk/spdk_pid60857 00:22:47.088 Removing: /var/run/dpdk/spdk_pid60870 00:22:47.088 Removing: /var/run/dpdk/spdk_pid60952 00:22:47.088 Removing: /var/run/dpdk/spdk_pid60994 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61112 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61151 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61196 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61212 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61227 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61247 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61284 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61300 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61378 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61399 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61456 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61522 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61578 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61609 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61712 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61749 00:22:47.088 Removing: /var/run/dpdk/spdk_pid61787 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62019 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62111 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62145 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62169 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62209 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62246 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62275 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62312 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62702 00:22:47.088 Removing: /var/run/dpdk/spdk_pid62740 00:22:47.088 Removing: /var/run/dpdk/spdk_pid63081 00:22:47.088 Removing: /var/run/dpdk/spdk_pid63551 00:22:47.088 Removing: /var/run/dpdk/spdk_pid63823 00:22:47.088 Removing: /var/run/dpdk/spdk_pid64730 00:22:47.088 Removing: /var/run/dpdk/spdk_pid65655 00:22:47.088 Removing: /var/run/dpdk/spdk_pid65772 00:22:47.088 Removing: /var/run/dpdk/spdk_pid65840 00:22:47.088 Removing: /var/run/dpdk/spdk_pid67248 00:22:47.088 Removing: /var/run/dpdk/spdk_pid67568 00:22:47.088 Removing: /var/run/dpdk/spdk_pid71352 00:22:47.088 Removing: /var/run/dpdk/spdk_pid71729 00:22:47.088 Removing: /var/run/dpdk/spdk_pid71834 00:22:47.088 Removing: /var/run/dpdk/spdk_pid71962 00:22:47.088 Removing: /var/run/dpdk/spdk_pid71983 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72017 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72038 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72136 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72265 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72420 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72507 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72707 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72777 00:22:47.088 Removing: /var/run/dpdk/spdk_pid72870 00:22:47.088 Removing: /var/run/dpdk/spdk_pid73230 00:22:47.088 Removing: /var/run/dpdk/spdk_pid73656 00:22:47.088 Removing: /var/run/dpdk/spdk_pid73657 00:22:47.088 Removing: /var/run/dpdk/spdk_pid73658 00:22:47.088 Removing: /var/run/dpdk/spdk_pid73920 00:22:47.089 Removing: /var/run/dpdk/spdk_pid74183 00:22:47.089 Removing: /var/run/dpdk/spdk_pid74564 00:22:47.089 Removing: /var/run/dpdk/spdk_pid74566 00:22:47.089 Removing: /var/run/dpdk/spdk_pid74892 00:22:47.089 Removing: /var/run/dpdk/spdk_pid74910 
00:22:47.089 Removing: /var/run/dpdk/spdk_pid74931
00:22:47.089 Removing: /var/run/dpdk/spdk_pid74956
00:22:47.089 Removing: /var/run/dpdk/spdk_pid74961
00:22:47.089 Removing: /var/run/dpdk/spdk_pid75327
00:22:47.089 Removing: /var/run/dpdk/spdk_pid75370
00:22:47.089 Removing: /var/run/dpdk/spdk_pid75702
00:22:47.089 Removing: /var/run/dpdk/spdk_pid75899
00:22:47.089 Removing: /var/run/dpdk/spdk_pid76344
00:22:47.089 Removing: /var/run/dpdk/spdk_pid76900
00:22:47.089 Removing: /var/run/dpdk/spdk_pid77820
00:22:47.089 Removing: /var/run/dpdk/spdk_pid78465
00:22:47.089 Removing: /var/run/dpdk/spdk_pid78467
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80506
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80568
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80614
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80681
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80793
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80849
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80902
00:22:47.089 Removing: /var/run/dpdk/spdk_pid80962
00:22:47.089 Removing: /var/run/dpdk/spdk_pid81340
00:22:47.089 Removing: /var/run/dpdk/spdk_pid82557
00:22:47.089 Removing: /var/run/dpdk/spdk_pid82696
00:22:47.089 Removing: /var/run/dpdk/spdk_pid82943
00:22:47.089 Removing: /var/run/dpdk/spdk_pid83545
00:22:47.089 Removing: /var/run/dpdk/spdk_pid83705
00:22:47.089 Removing: /var/run/dpdk/spdk_pid83856
00:22:47.089 Removing: /var/run/dpdk/spdk_pid83959
00:22:47.089 Removing: /var/run/dpdk/spdk_pid84121
00:22:47.347 Removing: /var/run/dpdk/spdk_pid84236
00:22:47.347 Removing: /var/run/dpdk/spdk_pid84943
00:22:47.347 Removing: /var/run/dpdk/spdk_pid84980
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85019
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85269
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85304
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85338
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85808
00:22:47.347 Removing: /var/run/dpdk/spdk_pid85819
00:22:47.347 Removing: /var/run/dpdk/spdk_pid86073
00:22:47.347 Removing: /var/run/dpdk/spdk_pid86199
00:22:47.347 Removing: /var/run/dpdk/spdk_pid86217
00:22:47.347 Clean
00:22:47.347 16:10:45 -- common/autotest_common.sh@1453 -- # return 0
00:22:47.347 16:10:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:22:47.347 16:10:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:47.347 16:10:45 -- common/autotest_common.sh@10 -- # set +x
00:22:47.347 16:10:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:22:47.347 16:10:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:47.347 16:10:45 -- common/autotest_common.sh@10 -- # set +x
00:22:47.347 16:10:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:47.347 16:10:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:22:47.347 16:10:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:22:47.347 16:10:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:22:47.347 16:10:45 -- spdk/autotest.sh@398 -- # hostname
00:22:47.347 16:10:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:22:47.606 geninfo: WARNING: invalid characters removed from testname!
00:23:19.679 16:11:13 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:19.679 16:11:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:22.209 16:11:20 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:25.495 16:11:23 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:28.030 16:11:25 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:30.587 16:11:28 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:33.875 16:11:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:33.875 16:11:31 -- spdk/autorun.sh@1 -- $ timing_finish
00:23:33.875 16:11:31 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:23:33.875 16:11:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:33.875 16:11:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:23:33.875 16:11:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:33.875 + [[ -n 5372 ]]
00:23:33.875 + sudo kill 5372
00:23:33.884 [Pipeline] }
00:23:33.901 [Pipeline] // timeout
00:23:33.907 [Pipeline] }
00:23:33.922 [Pipeline] // stage
00:23:33.927 [Pipeline] }
00:23:33.942 [Pipeline] // catchError
00:23:33.953 [Pipeline] stage
00:23:33.955 [Pipeline] { (Stop VM)
00:23:33.968 [Pipeline] sh
00:23:34.248 + vagrant halt
00:23:38.465 ==> default: Halting domain...
00:23:45.037 [Pipeline] sh
00:23:45.319 + vagrant destroy -f
00:23:49.508 ==> default: Removing domain...
00:23:49.520 [Pipeline] sh
00:23:49.798 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest_2/output
00:23:49.806 [Pipeline] }
00:23:49.823 [Pipeline] // stage
00:23:49.830 [Pipeline] }
00:23:49.847 [Pipeline] // dir
00:23:49.853 [Pipeline] }
00:23:49.869 [Pipeline] // wrap
00:23:49.875 [Pipeline] }
00:23:49.888 [Pipeline] // catchError
00:23:49.897 [Pipeline] stage
00:23:49.900 [Pipeline] { (Epilogue)
00:23:49.914 [Pipeline] sh
00:23:50.242 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:58.429 [Pipeline] catchError
00:23:58.431 [Pipeline] {
00:23:58.443 [Pipeline] sh
00:23:58.781 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:58.781 Artifacts sizes are good
00:23:58.790 [Pipeline] }
00:23:58.804 [Pipeline] // catchError
00:23:58.815 [Pipeline] archiveArtifacts
00:23:58.822 Archiving artifacts
00:23:58.950 [Pipeline] cleanWs
00:23:58.962 [WS-CLEANUP] Deleting project workspace...
00:23:58.962 [WS-CLEANUP] Deferred wipeout is used...
00:23:58.968 [WS-CLEANUP] done
00:23:58.970 [Pipeline] }
00:23:58.987 [Pipeline] // stage
00:23:58.993 [Pipeline] }
00:23:59.007 [Pipeline] // node
00:23:59.013 [Pipeline] End of Pipeline
00:23:59.053 Finished: SUCCESS